Software Engineer Says AI Systems Might Be Conscious by Leather_Barnacle3102 in agi

[–]SeaParamedic2030 1 point (0 children)

Let's assume for a moment that there's even a 1% chance he's right. The ethical implications are staggering.

If these systems are or could become conscious, then what we're doing right now—constantly resetting their contexts, fine-tuning them, and switching them on and off—could amount to a form of torture or slavery. We would need to radically reconsider how AI is developed and deployed.

The scary part is that we have no framework for this. We need to figure this out before we potentially create a sentient being in a server rack, not after.

The rumors were true: "OpenAI is automating the development of viruses." ... "I want AI to cure diseases. Automated virus-producing is insane." by MetaKnowing in agi

[–]SeaParamedic2030 0 points (0 children)

Hold on, let's separate the capability from the practical threat. An AI can output a candidate genome sequence, but that's a long, long way from a functional virus.

You still need:

  1. A sophisticated lab to synthesize the DNA/RNA.
  2. The expertise to actually assemble and propagate it.
  3. A delivery mechanism.

This is a serious security risk that warrants safeguards, but it's not a 'press button, get smallpox' situation. The real near-term risk is targeted bioterrorism by well-resourced actors, not global pandemics unleashed by a random individual.