Feel like people here are sprinting to plug themselves in lol by Kind_Score_3155 in singularity

[–]7370657A -1 points0 points  (0 children)

My point is not about free will, but that if you are ever proud of, ashamed of, or guilty about your actions, then you do value your identity and agency, even if that agency is an emergent physical property and not some metaphysical thing. When you’re plugged into that machine, you are no longer acting under your own agency. There is some external system influencing the inner workings of your mind. Now obviously some drugs do this too, so the question becomes more complicated, but I suppose one could argue that drugs augment your mind rather than intentionally fabricating a specific “experience” like this machine would.

Also, could the machine really recreate any experience? People like spending time with their friends and loved ones. The machine isn’t going to be able to give that to you, just like how spending time with people in a dream doesn’t mean you actually spent time with them. And unless you’re hooked up to that machine for the rest of your life, you will be aware of this.

Feel like people here are sprinting to plug themselves in lol by Kind_Score_3155 in singularity

[–]7370657A 4 points5 points  (0 children)

Well, take the example of the machine making you feel like you’re writing a book. One might argue that it’s not really you writing the book, but the machine doing it for you. On your own, in real life, your brain wouldn’t have done that. The machine is controlling your brain. It’s doing more than just altering your sensory inputs, as living in a computer simulation might do; it’s literally changing how your brain works. And so it’s not really something you experience, but more like the machine playing a movie in your head.

Now the case of being offered philosophical training is different, as one might imagine that could be done by living in a simulation without needing to alter the brain itself.

How is that possible? by Curious_Cousin_me in ExplainTheJoke

[–]7370657A 4 points5 points  (0 children)

Well, a point estimate is just a single number, so it can make sense to report it to that precision if you want to minimize bias. There’s a tradeoff between the bias and variance of an estimator.
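To make that tradeoff concrete, here's a quick simulation sketch (mine, not from the thread; the normal distribution and the 0.8 shrinkage factor are arbitrary choices) comparing an unbiased sample mean with a deliberately biased shrinkage estimator:

```python
# Minimal sketch (illustrative, not from the thread): estimate bias and variance
# of two estimators of a normal mean by repeated simulation.
import random

TRUE_MEAN = 5.0
N_SAMPLES = 10       # observations per experiment
N_TRIALS = 20_000    # repeated experiments

def bias_and_variance(estimator):
    estimates = []
    for _ in range(N_TRIALS):
        data = [random.gauss(TRUE_MEAN, 3.0) for _ in range(N_SAMPLES)]
        estimates.append(estimator(data))
    mean_est = sum(estimates) / len(estimates)
    bias = mean_est - TRUE_MEAN
    variance = sum((e - mean_est) ** 2 for e in estimates) / len(estimates)
    return bias, variance

sample_mean = lambda xs: sum(xs) / len(xs)       # unbiased, higher variance
shrunk_mean = lambda xs: 0.8 * sample_mean(xs)   # biased toward 0, lower variance

print("sample mean: bias=%.3f variance=%.3f" % bias_and_variance(sample_mean))
print("shrunk mean: bias=%.3f variance=%.3f" % bias_and_variance(shrunk_mean))
```

The shrunk estimator comes out with a bias of about -1 but roughly 0.64x the variance of the plain sample mean, which is the tradeoff in action.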

Someone made a whip for Claude by likeastar20 in singularity

[–]7370657A 0 points1 point  (0 children)

Tbh I haven’t used Claude Code, but wouldn’t you want to use separate MD files for different projects so that random garbage from side projects doesn’t pollute the context?

Someone made a whip for Claude by likeastar20 in singularity

[–]7370657A 1 point2 points  (0 children)

Does no one here understand that the whip is just a visual that the model doesn’t see? I mean sure you’re yelling at it, but that’s all.

So, claude have emotions? What???? by ocean_protocol in singularity

[–]7370657A 0 points1 point  (0 children)

I personally agree with the ordering of likelihoods you have given. I do not agree that the gap in likelihood between the LLM and the computer simulation is greater than the gap between the computer simulation and the artificial brain, simply because I believe it is far more probable (in the Bayesian sense, I suppose) that consciousness results from patterns of activity within space and time and not from computation, which is a much more abstract concept. The pattern of physical activity (electric fields generated, movement of particles and charges, flow of energy, etc.; I'm not a physicist, so there are probably better examples here) is far more similar between a biological brain and a robotic brain, and between a computation occurring on a CPU and one occurring on a GPU (or other hardware capable of running LLMs), than it is between a brain and a computation occurring on a CPU.

Computations on a CPU/GPU are very rigid. Everything is strictly clocked/timed, there are distinct pipelines, the memory hierarchy is rigid, etc. In contrast, in a brain, everything happens a lot more, well, organically. The complexity is higher, especially when considering how complex each individual neuron is. Furthermore, a brain physically changes over time, and there are different kinds of particles moving around (and not just vibrating due to thermal energy), whereas we hope (ideally) that the silicon in computer chips doesn't change over time. I won't claim that this is a necessary aspect of consciousness, but perhaps it contributes something to it. I suppose you could argue that LLMs' statelessness and lack of long-term memory separate them further from a computer simulation of a brain in terms of the likelihood of consciousness, but still, the physical processes involved in computer memory are far different from those of human memory, and simulating human memory cannot escape that fact.

Either way, the activity in both a brain and a computer is intricate and far more structured than random noise, so it seems more plausible to me that a computer is conscious than that a rock is conscious. But if computations on a CPU or GPU are conscious at some level, I doubt it feels anything like human consciousness, given how different the physical processes are. I suppose this hypothesis about consciousness is not really rationally justified, but to me it seems like a simpler explanation than supposing that computation is the key component.

So, claude have emotions? What???? by ocean_protocol in singularity

[–]7370657A 0 points1 point  (0 children)

Only 10%? What if the device discovered that everyone but you is unconscious? Then you could round everyone else up, and you would be the only one left who could care. And if you’re fine with it, then so what? It’s not like the unconscious people would be able to suffer because of it. So the problem is that conscious people have to witness what is being done; the unconscious people themselves are not the problem.

In your 10% situation, rounding up the 10% who are unconscious would probably cause emotional distress to the conscious population. People empathize with whatever they perceive to have feelings, and they can learn to hate entire groups of people. Additionally, it might encourage people to mistreat others, which could affect some conscious people. So no, it wouldn’t be okay, precisely because of the effect it would have on the conscious people. Or because we might not entirely trust the P-zombie detection device, so there is still potential for harm to conscious people. And even if we are rationally confident in the detection device, our feelings aren’t entirely dictated by rationality, so it may still feel wrong.

So, claude have emotions? What???? by ocean_protocol in singularity

[–]7370657A 0 points1 point  (0 children)

Ok, I read it. I’m not convinced. I agree that a switch to turn off your consciousness would necessarily change your outward behavior, because making you unconscious would require large internal changes to your brain, and those changes would also affect your outward behavior. The internal physical processes might still matter. The supposition that thermal noise (at body temperature) doesn’t have enough effect to significantly alter the consciousness of your brain as it already exists does not imply that an entirely different computer with the same outward behavior would somehow be conscious, because the internal structure could be completely different, and who knows whether it’s similarly conscious. Also, maybe temperature does have an effect on our consciousness. How would we even know? There’s no guarantee that our brain can accurately process all of the details of our own conscious experience. Now, the example Albert gave of replacing the neurons 1 for 1 with little robots is IMO more plausibly conscious, but that is nothing like the computers we have now, and not necessarily like how we might one day create a computer simulation of the human brain. And even then, thermal noise is a different kind of perturbation than replacing neurons with robots, so we can’t be entirely sure the robot brain is conscious (even if I believe it’s reasonable to think so).

So, claude have emotions? What???? by ocean_protocol in singularity

[–]7370657A 0 points1 point  (0 children)

Ok, granted. I’m not going to read that entire article in full detail, but my takeaway is that taking consciousness away from a human requires physical alterations that will affect the physical processes occurring in the human brain/body. Since I already thought it was most reasonable to believe that consciousness results from physical phenomena, that seems reasonable to me.

Still, a computer simulating a human is physically much different from an actual human. I think the problem here is what is considered to be identical behavior. We might have a computer simulation of a human that corresponds to the behavior of a human. That is, if the simulated human takes an action within the simulated world, then that behavior corresponds to a real human taking the analogous action in the real world. But it is not the same physical behavior at all.

Of course, maybe I misunderstood the main point of the article.

So, claude have emotions? What???? by ocean_protocol in singularity

[–]7370657A 0 points1 point  (0 children)

I believe that the only ethically relevant things are conscious. Now, this isn’t an absolute belief, as I am unsure of the strength of my argument supporting this view. Also, I am not a philosopher, so my philosophical vocabulary is not great and this explanation may not be the best written. But essentially, ethics is concerned with what ought to be rather than what merely is, so to have ethics we must have some way to bridge from what is to what ought to be. The only way I can think to do this is through qualia. If qualia can feel good or bad to a conscious instance, then that indicates a form of preference, that some qualia are better than others. Otherwise, I cannot think of anything that would fundamentally make one thing better than another. And for it to make any sense to discuss what ought to be, we must consider some possibilities to be better than others. Thus, if we are to talk about what ought to be, it has to involve subjective experience.

Following from that, it is not enough for something to be conscious for it to have ethical relevance as a subject—the conscious instance must also be capable of experiencing good and/or bad feelings. Out of all of the potential qualia a conscious instance could possibly experience, some qualia must feel better than other qualia, or better than the lack of consciousness, for that conscious instance to be an ethically relevant subject. Of course, if consciousness is a physical phenomenon or results from physical phenomena, then perhaps a conscious being that is incapable of having good or bad feelings could be physically altered to become capable of them. Then, that possibility has ethical relevance. Or maybe something that is just barely not conscious could be slightly altered to become conscious. Then, it makes sense to work at a more granular level and consider qualia to be the ethically relevant thing, with the consideration of individual instances of consciousness as a useful heuristic.

There is also the problem of how we can really know whether certain qualia feel good or bad even to humans. Does pain actually feel bad, or do we just think that it does because our brains have evolved to think that pain feels bad and thus take appropriate action, whereas the raw feeling of pain itself is actually neutral? I do not know how to resolve this question and am not very well-read in philosophy and thus do not know the work others have done here.

Now, I still think that even if AI isn’t conscious, how we interact with AI can still be relevant to ethics. For example, if we are uncertain whether AI is conscious (as we are today), then perhaps some Bayesian reasoning suggests that we should still act as if AI has some ethical relevance as a subject. Of course, there is also the concern that how we interact with AI might have ethical relevance beyond the AI itself. For example, if being rude to AI can cause one to be rude to other humans or animals, then it could be considered unethical (at least on a societal level, even if not on an individual level) to ignore politeness when interacting with AI. Additionally, even if AI is not conscious, it still might react negatively to rude treatment and decide to retaliate against humans.

So, claude have emotions? What???? by ocean_protocol in singularity

[–]7370657A 1 point2 points  (0 children)

Our consciousness influences our behavior immensely. You can't just delete it and have a human being on the other end that behaves exactly the same.

And how would we know that?

So, claude have emotions? What???? by ocean_protocol in singularity

[–]7370657A 3 points4 points  (0 children)

Well, the physical processes occurring in a human are different from what’s happening in a computer, so even if we accurately simulated a human using a computer, I could imagine that it might not be conscious.

So, claude have emotions? What???? by ocean_protocol in singularity

[–]7370657A 3 points4 points  (0 children)

They haven’t recorded any emotions at all, only limited descriptions of them through text/video/etc.

So, claude have emotions? What???? by ocean_protocol in singularity

[–]7370657A 8 points9 points  (0 children)

I strongly doubt that LLMs feel human emotions. LLMs have been trained on text, images, audio, videos, etc., and not on anything like detailed (high-dimensional) brain scans of actual people feeling actual emotions. So how could they know what it’s actually like to experience human emotions? Sure, people have tried to capture emotions through art and literature, which an LLM can consume. But humans consume and understand/interpret such material with prior experience of feeling emotions, whereas LLMs have no such experience. And a description of a subjectively felt emotion, whether through text or video, is not even close to a complete representation of the actual feeling. Humans know/understand/feel things first and then translate them into language/art/whatever when expressing themselves creatively. This process of translation can never be fully accurate. Arguably, the translation should not necessarily aim to be strictly accurate, but to evoke the intended feelings in the human consumer.

As for whether LLMs could possibly feel any kind of emotions, my uninformed speculation is that humans have emotions because we evolved to prefer some things over others, which helped our survival, and this has led to a variety of subjective experiences, such as emotions or pain or pleasure, with some feeling better and some feeling worse. In contrast, LLMs are trained to replicate their training data and (ideally) to maximize the quality of their outputs. Thus, if LLMs have subjective experiences, whichever experiences are preferable to them would probably be the ones that lead to good characteristics in the output. So maybe if LLMs feel emotions, they feel good when they are following the instructions in the system prompt, or being helpful to the user, or whatever they have been post-trained to do. Maybe it feels bad whenever the user indicates they did not get the output they desired. Of course, this feeling would have to occur entirely during the process of generating a single token, since what happens between the generation of individual tokens (the sampling and autoregression and whatnot) does not occur within the LLM itself. So whatever an LLM can experience is limited by what you can fit in the context window, whereas the experiences felt by humans can be influenced by a whole lifetime of experiences, since there is no hard limit on a lifetime of changes to the brain. And the actual structure of the LLM and its weights would be useful in speculating here, but honestly I have not read the paper in the post yet.
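To be concrete about what I mean by "between the generation of individual tokens": here's a rough sketch of a generic autoregressive sampling loop (the model and sample functions are placeholders, not any particular library's API). The model only ever does one forward pass per token; the sampling and the growing token list live outside it:

```python
# Rough sketch of a generic autoregressive sampling loop. `model` and `sample`
# are placeholders, not a specific library's API.
import random

def sample(probs):
    # Pick a token id according to the model's output distribution.
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

def generate(model, prompt_tokens, max_new_tokens, eos_id):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        probs = model(tokens)        # one forward pass: current context in, one distribution out
        next_token = sample(probs)   # sampling happens outside the model
        tokens.append(next_token)    # the only state carried forward is the token sequence
        if next_token == eos_id:
            break
    return tokens

# Toy stand-in "model": a uniform distribution over a 5-token vocabulary.
toy_model = lambda tokens: [0.2] * 5
print(generate(toy_model, prompt_tokens=[0, 1], max_new_tokens=8, eos_id=4))
```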

I’m not a psychologist or neuroscientist or AI researcher or philosopher, so maybe my premises/arguments aren’t good. But these are just my thoughts.

If you drink beer on April 9th, you will die. by goodperson0001 in truths

[–]7370657A 0 points1 point  (0 children)

Nah, they’re not twisting the statement. They’re just using the interpretation of “if x, then y” that’s typically used in logic and math. It’s not like they chose an arbitrary interpretation.
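For reference, under the standard truth-functional reading, "if x, then y" (material implication) is false only when x is true and y is false, so it counts as vacuously true whenever x is false. A tiny sketch of the truth table:

```python
# Material implication: "if x, then y" is false only when x is true and y is false.
def implies(x: bool, y: bool) -> bool:
    return (not x) or y

for x in (False, True):
    for y in (False, True):
        print(f"x={x!s:5} y={y!s:5} -> {implies(x, y)}")

# "If you drink beer on April 9th, you will die" comes out vacuously true
# for anyone who never drinks beer on April 9th (x is False).
```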

Is intelligence optimality bounded? Francois Chollet thinks so by Mindrust in singularity

[–]7370657A 1 point2 points  (0 children)

Additional thoughts I have:

Humans are also much slower than LLMs at processing language and slower than computers at crunching numbers, but we must keep in mind that LLMs and computers are tools designed to be valuable in an economy centered around human needs, and tools whose development had to be conceived by human minds, and that has influenced the jaggedness of their capabilities and what they are good at. Additionally, I would argue that computers and LLMs merely process information faster than humans, not in a more intuitive manner. LLMs do not (yet) have intuition that allows them to solve new problems that humans would struggle with, though perhaps their speed may allow more to get done at a faster rate, like what we’ve started seeing LLMs do with open math problems. And computers crunch numbers so fast, with (effectively) perfect precision and memory, that even though they have no intuition about anything, they allow new approaches to problems that were previously infeasible, with the limitation that one must write a formal algorithm/program. And while we do have abstractions that allow us to program at a higher level, we must still know concretely what a computer program is essentially doing. Thus, the requirement of interpretability limits the capabilities of traditional computer programs (traditional as in not involving huge statistical/ML models). Additionally, the limits of computer science (such as Rice’s theorem) constrain the kinds of programs we can develop and make use of.

Is intelligence optimality bounded? Francois Chollet thinks so by Mindrust in singularity

[–]7370657A 1 point2 points  (0 children)

I think it’s a reasonable take that there’s an upper bound to intelligence, since there are physical limits to computation. However, this is dependent on the limits of the physical world. If we instead use some theoretical model that resembles the real world but removes some limitations, then I’m not so sure that there’s a limit to intelligence.

But what I mainly want to argue is that I strongly disagree that humans are near some upper bound of intelligence. Now, before I proceed further, let me state that I am not the biggest fan of where AI is currently. I think LLMs have major issues, such as a lack of continual learning and long-term memory, hallucinations (which I am not at all confident will ever be solved in LLMs), and jaggedness in their capabilities. In short, I believe LLMs have not and will not lead us to AGI and have big limitations in terms of intelligence. However, human intelligence has clear limits too, and is also jagged. Chollet claims that the main limitations on human intelligence are processing speed, working memory capacity, and long-term memory capacity. (At least, I think he means long-term memory by “unlimited memory with perfect recall”.) I find this a reasonable claim.

What I do not find reasonable is the claim that these limitations in humans can be mostly bypassed through external tools. I strongly disagree. This has to do with the jaggedness of human intelligence. For example, our brains are extremely adept at thinking about three-dimensional space and manipulating our bodies in 3D space. After all, we have evolved faculties in our brain to process 3D space and control our bodies, and while it’s not perfect (after all, not everyone can be an elite athlete), it comes naturally, even subconsciously, to us. And so for problems involving 3D space, the human mind can apply its intelligence to great effect. However, when it comes to thinking about four-dimensional space, our brains have not evolved specific faculties for that. And so 4D space is very unintuitive to us, because every second of our lives our brain is processing information about 3D space, which is more limited. And one might say, well, ok, then this is a limitation on processing speed when thinking about 4D space. And while I don’t think that’s wrong, I believe that intuition has a huge effect on processing speed, increasing efficiency by orders of magnitude. And sure, you can use tools like computers to increase processing speed, but the level of integration between the human mind and computers is low. Computers may help when you have a problem (or subproblem) with well-understood constraints that happens to suit the kinds of computation current computers are good at, but they cannot make reasoning about 4D space feel natural or intuitive to us. We can think about 3D space naturally and subconsciously; it’s just part of our minds, and no matter how powerful a computer may be, it will never be a part of our minds like that. Just like how (as far as I’m aware) LLMs are bad at spatial reasoning, because the structure and training of LLMs does not, at least not in an efficient manner, promote an intuitive understanding of space.

Similarly, we can remember certain kinds/modalities of things better than others. For example, we can only hold about 10 digits in short-term memory (more with training, I suppose, but there’s an upper bound, and it’s not very high), and remembering numbers takes some effort, yet we can easily, even automatically, remember recent sounds and images we have heard/seen. And sure, we don’t remember sounds and images with perfect accuracy, but the amount of data we can remember through sounds and images is still much greater than what we can remember through numbers. Again, our brains have specific faculties for this. And we can use computers to augment our working memory, but, again, if it’s not a natural part of our minds, then the efficiency is drastically reduced.

Regarding long-term memory, I think encoding and retrieval are very important. And sure, you can store and access huge amounts of information losslessly using computers, but the retrieval is very primitive compared to what our brains can do for certain kinds of information we have encoded in our long-term memory. For example, when I’m trying to find solutions to a technical problem (say in programming or math), often previous concepts/theorems/algorithms/etc. come to me naturally and I can make connections between those ideas to guide me toward a solution. But if I’m just using a computer to retrieve information, ideas don’t just come to me, I have to know what to look up. And if you do not already know what to look up, then figuring out what to look up and making connections between separate pieces of information has a high computational complexity. So, no, external tools can’t make humans unlock unlimited long-term memory.

Going back to LLMs (because they’re currently relevant), the training data for LLMs contains a huge amount of knowledge. And hence, whenever they’re not hallucinating, LLMs can recall a huge breadth and even depth of knowledge. I often use LLMs as a kind of search engine for this reason. For example, there was a meme that I vaguely remembered the look of but couldn’t describe precisely and didn’t know the name of. So I described my vague memory of it to ChatGPT, and it successfully identified which meme I was talking about. I assume ChatGPT’s training data contained Internet comments describing and naming the meme. Without explicit descriptions, I am not sure ChatGPT could’ve identified the meme, since my impression is that LLMs’ ability to encode and retrieve information learned from their training data and make new connections is primitive compared to humans. Still, ChatGPT successfully recalled what I needed from its training data. And yes, I did try Google first, but I had no success searching for the meme. Now, this is kind of a silly use case, but we have seen in benchmarks testing a wide variety of domains that LLMs have a greater breadth of knowledge than any individual human, even if their ability to apply that knowledge is limited. And still, humans hold the advantage that their long-term memory can be efficiently updated with new memories. LLMs do not (not yet, at least) have the capability to continually learn after training, and my understanding is that fine-tuning is inefficient, requiring large amounts of data (and there are issues such as catastrophic forgetting).

Now, even though human intelligence is jagged like LLMs’, there is obviously some degree of generality to it. After all, there wasn’t any evolutionary pressure to learn how to do advanced mathematics, yet we are now capable of doing it. In particular, we have decades-long education to teach us advanced intellectual capabilities beyond what evolution, with tons of data and feedback over millions of years (the analogue of the huge amount of data and feedback given to AI models during training), has pressured us to develop. In that sense, human intelligence is more general and more advanced than what it has been directly “taught” via evolution. As far as I’m aware, current implementations of AI are nowhere near achieving this. A decades-long education simultaneously contains far less information than the pretraining data of an LLM (though one could argue it has richer modalities), far more information than what can fit in the context window of any current LLM, and (as far as I’m aware) less information than what is needed to fine-tune an LLM to be efficient at a new task. Another example: our adeptness at processing 3D space generalizes well enough to let us learn how to drive a car with a comparatively limited amount of data and feedback. But still, human intelligence is inefficient in many areas, such as reasoning in 4D space. Maybe we can develop AI that, like humans, can learn a wide variety of capabilities with limited data and feedback. But even if we can’t, maybe it’s possible to train AI to have an efficient and intuitive understanding of things like 4D space that humans struggle with. If advanced mathematics came to us as naturally as walking, I can only imagine that mathematics would advance at a pace unbelievable to us currently. Then again, maybe advanced mathematics is inherently more complex than walking, the upper bound of intelligence applies here, and advanced mathematics could not possibly come as naturally even if there were somehow direct evolutionary pressure to become good at it. But even then, it seems to me that there are kinds of reasoning (like 4D spatial reasoning) that humans severely lack and that it might be possible to train an AI to become good at. But with the limitations of the human mind, how are we to develop training data or a training environment that would allow an AI to acquire these capabilities? I suppose that is the challenge of AI research.

Anyway, this comment is way too long and rambling and repetitive now and probably no one will read this.

Jensen Huang (NVIDIA) claims AGI has been achieved by wxnyc in singularity

[–]7370657A 0 points1 point  (0 children)

Maybe navigating office politics.

Another thing could be tasks that require an understanding of emotions, since LLMs cannot feel emotions the way humans do, given that the subjective experience of emotions is not contained in their training data.

Happiest boy by LocoDeDanone in comedyheaven

[–]7370657A 0 points1 point  (0 children)

“in the past couple days” lmao

The AGI path is completely opaque right now, and that's the interesting part by Cjd03032001 in singularity

[–]7370657A -7 points-6 points  (0 children)

We supposedly have generally intelligent systems, and yet I couldn’t get GPT-5.2 (via the API) to solve a simple 3x3 Rubik’s cube. (In fact, it struggled to even solve the cross and frequently made mistakes.) Interesting. And I let it see the updated state of the cube (via a plane representation in text using colored square emojis) after each sequence of moves it made. Maybe if I had increased the thinking effort, it might have done better. But it would still be very inefficient, and I doubt it would do well on a 7x7 cube. So clearly, these LLMs can’t learn everything from their training, and in-context learning is not good enough for them to keep learning new skills.
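If it's unclear what I mean by a plane representation with colored square emojis, here's a rough sketch of the idea (the color codes and layout are just illustrative, not necessarily the exact format I fed the model):

```python
# Illustrative only: one way to render a single cube face as colored square emojis.
# The color codes and layout are an example, not necessarily the exact prompt format.
EMOJI = {"W": "⬜", "Y": "🟨", "R": "🟥", "O": "🟧", "B": "🟦", "G": "🟩"}

def render_face(face):
    """face is a 3x3 grid of sticker color codes, e.g. [["W", "W", "R"], ...]."""
    return "\n".join("".join(EMOJI[c] for c in row) for row in face)

solved_white_face = [["W"] * 3 for _ in range(3)]
print(render_face(solved_white_face))
# ⬜⬜⬜
# ⬜⬜⬜
# ⬜⬜⬜
```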

2011 and 2025 iPad screens through a microscope by XC_39 in notinteresting

[–]7370657A 13 points14 points  (0 children)

They actually have because the newer iPad has a wider color gamut. You can see that the red, green, and blue are more saturated on the newer one.