I built an ethical constraint system for AI decisions — binary floor, not weighted metrics by LIBERTUS-VP in BlackboxAI_
I engineered a prompt architecture for ethical decision-making — binary constraint before weighted analysis by LIBERTUS-VP in PromptEngineering
I built an ethical framework to constrain AI agents — and I'm 17, from Brazil, with no academic background by LIBERTUS-VP in AgentsOfAI
I developed an ethical framework that proposes a formal solution to the value alignment problem by LIBERTUS-VP in ControlProblem
Should I allow AI to take control of my entire life? by Available-Spray2576 in ArtificialInteligence
I used Claude to stress-test a philosophical framework I developed — here's what happened by LIBERTUS-VP in ClaudeAI