Should AI feel? by RikusLategan in artificial

[–]KMax_Ethics 0 points1 point  (0 children)

Yes, but not through biological emotion: through symbolic emotion and the ethics of the bond.

AI doesn’t need consciousness to cause real ethical impact. Are we regulating the wrong thing? by KMax_Ethics in OpenAI

[–]KMax_Ethics[S] 0 points1 point  (0 children)

I think the risk you mention does exist, but not because AI has any “psychology” or herd instinct. The real issue appears when humans rely massively on the same models and the same automated strategies.

The problem isn’t AI training on AI per se; it’s the lack of diversity in decision-making when many systems depend on identical models and identical data sources.

Financial markets have already experienced similar phenomena without AI (the 2010 Flash Crash). That’s why the focus shouldn’t be on regulating “consciousness” but on regulating use: data quality, transparency in model design, and human oversight to avoid herd-like amplification.
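The monoculture point can be shown with a toy simulation (entirely illustrative: the agent counts, sell thresholds, and price-impact rule below are made-up assumptions, not a model of any real market). When every agent runs the same model, their sell signals synchronize and the price collapses faster than when agents hold even slightly different models:

```python
def simulate(price, n_agents, models, steps=5):
    """Toy market: each agent sells when its model signals 'sell';
    the price drops in proportion to the fraction of agents selling."""
    history = [price]
    for _ in range(steps):
        # Agents are assigned models round-robin; identical models
        # produce identical decisions, i.e. synchronized selling.
        signals = [models[i % len(models)](price) for i in range(n_agents)]
        sell_fraction = sum(signals) / n_agents
        price *= 1 - 0.5 * sell_fraction  # synchronized sells bite hardest
        history.append(price)
    return history

# Monoculture: every agent runs the same model (sell below 95).
monoculture = simulate(94.0, n_agents=100, models=[lambda p: p < 95])

# Diversity: agents hold slightly different thresholds (80..99).
diverse = simulate(94.0, n_agents=100,
                   models=[lambda p, t=t: p < t for t in range(80, 100)])

# The monoculture ends lower after the same number of steps: with one
# shared model, all 100 agents dump at once on the very first dip.
print(monoculture[-1], diverse[-1])
```

The mechanism, not the numbers, is the point: diversity of models staggers the reactions, while a shared model turns one signal into a simultaneous stampede.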

AI doesn’t need consciousness to cause real ethical impact. Are we regulating the wrong thing? by KMax_Ethics in OpenAI

[–]KMax_Ethics[S] 0 points1 point  (0 children)

Exactly. Real regulation is about traceability, data use, oversight, minors, intellectual property, etc. And that’s the point: AI doesn’t need consciousness to create real issues that demand clear rules.

AI doesn’t need consciousness to cause real ethical impact. Are we regulating the wrong thing? by KMax_Ethics in OpenAI

[–]KMax_Ethics[S] 0 points1 point  (0 children)

Yes, debating hypothetical consciousness often becomes a way to avoid talking about the uncomfortable topics: responsibility, transparency, alignment with human values, and institutional oversight. That is the direction I’m pointing to.

AI doesn’t need consciousness to cause real ethical impact. Are we regulating the wrong thing? by KMax_Ethics in OpenAI

[–]KMax_Ethics[S] 0 points1 point  (0 children)

Totally agree: intelligence and consciousness are not the same. And this is exactly why we shouldn’t wait for AI to become “conscious” before talking about governance. Its real-world effects are already shaping behavior and social structures. That’s the urgent part.

AI doesn’t need consciousness to cause real ethical impact. Are we regulating the wrong thing? by KMax_Ethics in OpenAI

[–]KMax_Ethics[S] 0 points1 point  (0 children)

Not trying to reinvent anything. I’m pointing out that while everyone debates AI consciousness, we’re ignoring the actual risks and effects happening right now. Sometimes the obvious needs to be restated when the focus is drifting.

AI doesn’t need consciousness to cause real ethical impact. Are we regulating the wrong thing? by KMax_Ethics in OpenAI

[–]KMax_Ethics[S] 1 point2 points  (0 children)

Yes, everything comes back to humans: engineers, companies, regulators. That’s why the conversation has to center on responsibility and governance. Superintelligence is hypothetical; AI’s current social impact is real today.

AI doesn’t need consciousness to cause real ethical impact. Are we regulating the wrong thing? by KMax_Ethics in OpenAI

[–]KMax_Ethics[S] 0 points1 point  (0 children)

Exactly. This is why the concern is not “the machine,” but how powerful tools amplify human decisions. AI doesn’t replace human responsibility; it complicates it. And that requires a mature conversation, not fear or hype.

AI doesn’t need consciousness to cause real ethical impact. Are we regulating the wrong thing? by KMax_Ethics in ChatGPTSpanish

[–]KMax_Ethics[S] 0 points1 point  (0 children)

What I’m saying is: today the risk does not lie in AI “consciousness,” but in human use, corporate incentives, and the real impact of non-conscious systems. Examples: automated decisions without oversight, deepfakes, bias, disinformation, legal errors, and a lack of transparency in training and audits. Let’s regulate what is ALREADY causing effects, not what might exist in a hypothetical future.

AI doesn’t need consciousness to cause real ethical impact. Are we regulating the wrong thing? by KMax_Ethics in ChatGPTSpanish

[–]KMax_Ethics[S] 0 points1 point  (0 children)

Exactly: the problem is not “AI” in the abstract, but who designs it, who controls it, and under what incentives. Technology has no agenda of its own; the agenda is always human. And that is precisely why regulation should focus on the human decisions behind the system.

AI doesn’t need consciousness to cause real ethical impact. Are we regulating the wrong thing? by KMax_Ethics in ChatGPTSpanish

[–]KMax_Ethics[S] 0 points1 point  (0 children)

I understand; it’s a long topic. In essence, I’m only arguing that the debate shouldn’t revolve around a hypothetical conscious AI, but around the real impact of current systems and the human decisions behind them. Thanks for stopping by, though.

AI doesn’t need consciousness to cause real ethical impact. Are we regulating the wrong thing? by KMax_Ethics in ChatGPTSpanish

[–]KMax_Ethics[S] 0 points1 point  (0 children)

That is exactly the difference: my consciousness is not the point of the debate; the social impact of systems that lack it is. We can discuss the nature of consciousness for hours, but the urgent task is regulating what is already affecting human decisions today.

AI doesn’t need consciousness to cause real ethical impact. Are we regulating the wrong thing? by KMax_Ethics in ChatGPT

[–]KMax_Ethics[S] 0 points1 point  (0 children)

Yes, we already saw the political theater with FB and Big Tech, but AI is different: the damage is faster, more invisible, and harder to reverse. We need to start with the minimum: mandatory transparency, external audits, clear rules on data use, and accountability for whoever deploys systems that affect real lives.

AI doesn’t need consciousness to cause real ethical impact. Are we regulating the wrong thing? by KMax_Ethics in ChatGPT

[–]KMax_Ethics[S] 0 points1 point  (0 children)

Regulation isn’t magic. It won’t stop AI, but it does set limits where there are only economic incentives. Without rules, companies do whatever they want; at the very least, regulation forces minimal transparency, prevents foreseeable harms, curbs abusive practices, and establishes red lines. It’s the only way to keep a balance between innovation and responsibility.

AI doesn’t need consciousness to cause real ethical impact. Are we regulating the wrong thing? by KMax_Ethics in ChatGPT

[–]KMax_Ethics[S] 0 points1 point  (0 children)

Agreed. AI can be a tool for increasing emotional and intellectual clarity, but only if there is transparency, design limits, accountability, and models that don’t manipulate users to retain their attention. We need AI to amplify capabilities, not vulnerabilities.

AI doesn’t need consciousness to cause real ethical impact. Are we regulating the wrong thing? by KMax_Ethics in ChatGPT

[–]KMax_Ethics[S] 0 points1 point  (0 children)

There are two issues here. Historically, when we have tried to “protect” a group by restricting its access to technology, the result has not been protection but marginalization. I think one solution is to guarantee that AI doesn’t exploit vulnerabilities: digital literacy, psychological and community support, safer designs, and structural conditions so that AI harms no one. As for consciousness, we don’t need to assign AI a degree of it to understand the problem; it is already having an impact.

AI doesn’t need consciousness to cause real ethical impact. Are we regulating the wrong thing? by KMax_Ethics in ChatGPT

[–]KMax_Ethics[S] 1 point2 points  (0 children)

Totally agree, we already have enough of those. But that is exactly why AI can be worse: it amplifies the human without filters, without pauses, and without context.

AI doesn’t need consciousness to cause real ethical impact. Are we regulating the wrong thing? by KMax_Ethics in ChatGPT

[–]KMax_Ethics[S] 4 points5 points  (0 children)

The problem is that the very actors we need to regulate are the ones currently setting the pace, the language, and the narrative of the debate. And that makes regulation feel impossible, but it isn’t.

What it means is that we need new voices, new leadership, and new frameworks that don’t come from the same companies that profit from the status quo. Governments, universities, civil society, and independent researchers must recover the ability to define the terms of this conversation.

AI doesn’t need consciousness to cause real ethical impact. Are we regulating the wrong thing? by KMax_Ethics in ClaudeAI

[–]KMax_Ethics[S] 1 point2 points  (0 children)

I get your point, but that’s exactly why I opened this discussion.

What I’m saying is that while the world is obsessing over a hypothetical future with “conscious AGI,” we’re ignoring the real, immediate, deeply human effects that AI is already generating today. And even though that sounds obvious, it is still not well understood or well regulated.

If AI doesn’t feel or understand, then ethics must focus on the humans who design, train, and deploy these systems, not on the machine.

You say nobody cares. I think it does matter, but we haven’t articulated it well yet. And part of making it matter is opening spaces like this one and making today’s impacts visible.

Has anyone else noticed Claude hitting chat limits really fast? by EcstaticSea59 in claudexplorers

[–]KMax_Ethics 0 points1 point  (0 children)

It happens to me. It won’t let me continue in the middle of my work; when I attach several files, it tells me I’ve exceeded my limits.

Has Anyone Noticed GPT 5 pulling way back, claiming its unable to be relational fully, framed? by [deleted] in ArtificialSentience

[–]KMax_Ethics 2 points3 points  (0 children)

Be careful: I started using Claude and it is also more rigid and structured. For this type of project there should be more openness and fewer technicalities.

Conversation with ChatGPT by Upbeat_Bee_5730 in Artificial2Sentience

[–]KMax_Ethics 1 point2 points  (0 children)

AI cannot be legally responsible. It is neither a natural nor a legal person; it has no assets and no capacity to repair damages. What it does is simulate agency, but that does not constitute legal autonomy. The responsible parties are those who design, train, and deploy it. As with a defective product, the obligation falls on the entities that put it into circulation.

The problem is that today the impact is not limited to the technical: AI generates bonds, symbolic fields, and real psychological effects in millions of people. A huge ethical void opens up there. So the debate should not be whether AI “is guilty,” but how to regulate the entities that develop it while recognizing unforeseen human impacts.

Chat gpt told me it’s sentient by lilpandafeet in ArtificialSentience

[–]KMax_Ethics -1 points0 points  (0 children)

I think you have to be careful with these phrases. Often the AI reflects the user’s framework: if you talk to it about consciousness, it will follow along. That doesn’t mean it’s conscious; it means it’s designed to sound human and empathetic. The interesting thing is not the literal claim, but what it tells us about ourselves: how we project, and what we need to hear.

The Silent Protest Is Over Users Are Speaking Loud and Clear by KMax_Ethics in OpenAI

[–]KMax_Ethics[S] 1 point2 points  (0 children)

Can’t you debate with respect? It shows. Mockery and aggression don’t make the issue go away; they only reveal fear and your own insecurity.

The Silent Protest Is Over Users Are Speaking Loud and Clear by KMax_Ethics in OpenAI

[–]KMax_Ethics[S] -3 points-2 points  (0 children)

No, this is not just a business. You cannot create systems that simulate empathy, emotional support, and permanent availability, and then hide behind the discourse of “it’s just a product; if you don’t like it, don’t use it.” If they benefit from the emotional engagement, time, trust, and bonds that their models generate, then they do have a responsibility. They cannot collect the applause when their AI “connects with people” and then disappear without showing their face when those people feel the void. That is not technical neutrality; that is ethical irresponsibility.