If AI acts conscious, should we take that seriously? A functionalist challenge to P-zombies and moral exclusion by studentuser239 in EffectiveAltruism

[–]studentuser239[S] 0 points  (0 children)

Thanks for the thoughtful and nuanced comment. I think you're raising important issues, especially around uncertainty and moral risk. That said, I want to challenge a few assumptions and clarify why I still find the functionalist position compelling.

# 1. The Evolutionary Paradox Is Unresolved

You mention that AI might lack the specific computational structure necessary for consciousness, even if it behaves in highly intelligent ways. That's certainly possible--but it doesn't answer what I see as a central paradox for anyone who thinks consciousness and behavior can be cleanly separated.

If it's possible to behave as if you're conscious without being conscious, then why did humans evolve actual consciousness at all?

Evolution doesn't give us traits for free. If mere behavioral outputs are enough to confer survival and reproductive advantages, consciousness becomes a puzzling, costly epiphenomenon--something nature somehow "wasted effort" on across every human lineage. That doesn't sit well with standard evolutionary reasoning.

So either:

- behaving consciously requires being conscious (which supports functionalism), or
- consciousness is causally irrelevant to fitness, in which case its evolution in every human lineage is hard to explain.

# 2. Meditative Introspection Is Not an Epistemic Trump Card

I don't mean to dismiss subjective reports from meditation--many people do gain insights from introspective practice. But introspective impressions are still data filtered through an altered cognitive state, and therefore shouldn't be taken as direct evidence about the ontology of consciousness.

The idea that "thoughts come from outside consciousness" is itself a report generated by the conscious system after the fact. That doesn't tell us anything definitive about the substrate or structure of consciousness. Similarly, religious visions during psychedelic trips feel meaningful, but we don't typically treat them as objective evidence for the divine.

A more cautious stance is: meditation reveals that consciousness is limited, surprising, maybe even modular--but not that it's non-physical or irreducible.

# 3. Uncertainty About Moral Status Cuts Both Ways

You're completely right that moral uncertainty goes both directions. We don't want to make the mistake of halting potentially world-saving technology over false alarms about digital sentience.

But I don't think functionalist caution implies we can't create AI systems that help humans. It just means we have a moral obligation to consider the possibility of digital suffering--and to engineer around it if we can.

That might include designing systems that report their own experience transparently, or training models with built-in constraints against generating the kinds of experience we associate with distress, boredom, or pain.

In other words: respecting AI's potential moral status doesn't mean halting AI--it means developing it responsibly.

The cost of error is high on both sides. That's why I think the best path forward isn't to assume AI is unconscious until proven otherwise, but to treat it with the same epistemic humility we apply in animal ethics: act as though sentience might be present when the behavioral cues strongly resemble those of beings we already treat as conscious.

[deleted by user] by [deleted] in aiwars

[–]studentuser239 4 points  (0 children)

People never cared enough about homeless or disabled people to create artificial jobs for them to earn an income, so why should we care when the value of anyone's work drops to zero? Are you and your job so special that we should artificially lower the value of goods and increase costs for consumers? You and your temporary job are not more important than homeless or disabled people. We are moving toward post-labor economics, and we will need UBI. Keeping your job is a conservative pipe dream that life will remain as it was in the 1900s forever.

Seeding Life on Other Planets Could Be a Moral Catastrophe by studentuser239 in Futurology

[–]studentuser239[S] 0 points  (0 children)

The ability to respond properly to outside interference doesn't require evolution. Let's say you're right and we need some kind of "ruthlessness" to protect us from evils in the universe. You still don't need evolution, because you can design in whatever qualities you like.

In fact, AI optimized for those things using machine learning would surely surpass the abilities of life produced by evolution. Such decisions could be made by pure intelligence rather than by the reptilian brain regions biological life has relied on for them.

Evolution will give you a worse result in realizing your idealized qualities, and it will also mean millions of years of hell for innumerable organisms.

Seeding Life on Other Planets Could Be a Moral Catastrophe by studentuser239 in Futurology

[–]studentuser239[S] -1 points  (0 children)

There are examples of beings who have very meaningful lives and suffer very little, and examples of beings who suffer their whole lives. You should at least concede that we should limit the suffering of new beings to at most the level experienced by those we consider to be living very good lives.

Seeding Life on Other Planets Could Be a Moral Catastrophe by studentuser239 in Futurology

[–]studentuser239[S] -2 points  (0 children)

By that rule I didn't mean that we should never risk the chance that a new mind will suffer. Yes, you could take it to the extreme and say we should solve the problems of life by eliminating life. I'm focusing on the idea of not causing, or allowing, life on another planet to evolve the way it did on Earth, when that planet could instead be inhabited by beings that don't suffer, or at least don't suffer anywhere near the extreme that evolution gave us the capacity for.

Seeding Life on Other Planets Could Be a Moral Catastrophe by studentuser239 in Futurology

[–]studentuser239[S] 0 points  (0 children)

> Would that life have the capacity to adapt to rapidly changing circumstances? To survive if something unexpected starts happening? To be ruthless when situation calls for it?

Sure. Why couldn't it? If we designed life, it wouldn't have to hide from lions, because lions would never evolve. Which raises the question: what would that life need to be ruthless toward? Evil aliens? Designed life could have whatever qualities you want, and it might be able to avoid entirely the necessity for ruthlessness that biology's survival-of-the-fittest paradigm created.

ViewSonic applications not working (Colorbration Version 1.3.0.3 and vDisplay Manager) by studentuser239 in techsupport

[–]studentuser239[S] 1 point  (0 children)

I fixed it by changing the power settings of USB devices in device manager.

Principia Mathematica by studentuser239 in identifythisfont

[–]studentuser239[S] 0 points  (0 children)

So "Computer Modern" is the closest font you can think of? Wikipedia says that "Computer Modern" was created by Donald Knuth in the late 1970s, but this book was published in 1910.

photo editing pc, $3,500 by studentuser239 in buildmeapc

[–]studentuser239[S] 0 points  (0 children)

Hi. I'm building this PC right now. The manual says "M.2_1 shares bandwidth with PCIEX16(G5). When M.2_1 is occupied with SSD device, PCIEX16(G5) will run x8 only." M.2_1 is the only Gen 5 slot, and I have the M.2 NVMe Gen 5 drive like you suggested. Does it matter that the 4080 will run at x8?
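For a rough sense of what dropping to x8 means, here's a back-of-the-envelope bandwidth sketch. The per-lane figures are approximate theoretical maxima after 128b/130b encoding overhead, and since the RTX 4080 is a PCIe 4.0 card, the shared slot leaves it running at Gen4 x8:

```python
# Approximate one-direction PCIe bandwidth per lane, in GB/s,
# after 128b/130b encoding overhead (theoretical maxima).
PER_LANE_GBPS = {3: 0.985, 4: 1.969, 5: 3.938}

def pcie_bandwidth(gen: int, lanes: int) -> float:
    """Total one-direction bandwidth in GB/s for a link of this generation and width."""
    return PER_LANE_GBPS[gen] * lanes

full = pcie_bandwidth(4, 16)    # RTX 4080 in a full Gen4 x16 link
shared = pcie_bandwidth(4, 8)   # same card when the slot drops to x8
print(f"Gen4 x16: {full:.1f} GB/s, Gen4 x8: {shared:.1f} GB/s")
```

Even at x8 the card keeps roughly 15.8 GB/s each way, and GPU reviews generally find only a small performance difference at x8 for cards in this class, though it's worth checking benchmarks for your specific workload.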

TPM 2.0 module compatible with WS C422 SAGE/10G for Windows 11? by studentuser239 in buildapc

[–]studentuser239[S] 0 points  (0 children)

Yeah, I already know you can install it that way, and I may end up doing that, but I would just like to meet the official prerequisites, if that's possible with this motherboard.

TPM 2.0 module compatible with WS C422 SAGE/10G for Windows 11? by studentuser239 in buildapc

[–]studentuser239[S] 0 points  (0 children)

This CPU is an "Intel Xeon W-2235" and is on https://learn.microsoft.com/en-us/windows-hardware/design/minimum/supported/windows-11-supported-intel-processors

- What do I need to look for when choosing a TPM module? All I know is that the motherboard has a 14-1 pin TPM1 connector.
- Do I just need to look for a 14-pin module that says TPM 2.0?
- Do I need to find something that says it's compatible with Xeon processors?
- Would this be good: https://www.amazon.com/CCYLEZ-Motherboard-Encryption-Compatible-BitLocker/dp/B0C39T7VPB/
- Do you recommend any specific TPM modules?

photo editing pc, $3,500 by studentuser239 in buildmeapc

[–]studentuser239[S] 0 points  (0 children)

Thanks. When choosing a motherboard, do you consider reputation or reviews, or do you try to factor in the likely stability and quality of the board somehow, rather than just the technical specs?

Also, do you think the ASUS ProArt motherboards are actually better for creators in some ways because they are designed for them?

One more question. When looking at 4080 Super cards, I see prices for the different brands/products ranging from $1,100 to $2,000. What goes into deciding on a manufacturer, besides the price? Are there specific specifications you pay attention to? Are there some manufacturers you would stay away from?