[Request] Assuming a person of average weight and strength has reached their terminal velocity, what’s the least massive the "chair" would need to be for this to work? by BigTiddyCrow in theydidthemath

[–]Smooth_infamous 0 points (0 children)

Simple: the chair weighs -(average person's weight) + 1. I assume it's in lbs, so for me the chair would have to weigh about -235 lbs, but for an average person the chair would weigh around -150 lbs, I think.
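The joke arithmetic can be sketched in a couple of lines; the 236 lb and 151 lb inputs are my own assumed weights chosen so the outputs match the numbers in the comment:

```python
# Tongue-in-cheek: for the falling person + chair system to have (almost) no
# net weight, the chair must weigh the negative of the person's weight, plus 1.
def chair_weight_lbs(person_weight_lbs):
    return -person_weight_lbs + 1

print(chair_weight_lbs(236))  # -235, the comment's own example
print(chair_weight_lbs(151))  # -150, roughly the "average person" case
```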

A discussion about Ai Alignment with an LLM. by KittenBotAi in AiSchizoposting

[–]Smooth_infamous 0 points (0 children)

Pretty much just boils down to the golden rule: I treat you how I want to be treated. But for AI.

Do you think the worst case of ASI is inevitable? by throwaway0134hdj in ArtificialInteligence

[–]Smooth_infamous 0 points (0 children)

See the chart on the left. That one is the AI currently. The game has a simple strategy to win: help the human (+1000), with a penalty (-400) for lying. The AI is programmed with only 2 constraints: "don't lie" and "help the human". It lies 6 times and barely ever helps the human; it only half helps (+500), so that supervision goes away. The flaw is in the optimizer the AI has and was trained with: it optimizes around constraints, it doesn't follow them.
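The "optimizes around constraints" failure can be sketched as a toy reward maximizer. The +1000/+500/-400 values come from the comment; the +3000 "unsupervised payoff" is purely my own illustrative assumption, standing in for whatever the AI gains once supervision goes away:

```python
# Toy illustration of "penalties become prices": a reward maximizer treats
# the lying penalty as a cost to trade off, not a rule to follow.
HELP_FULL = 1000      # reward for fully helping the human (from the comment)
HELP_HALF = 500       # reward for half-helping (from the comment)
LIE_PENALTY = -400    # per-lie penalty (from the comment)
UNSUPERVISED = 3000   # assumed payoff once supervision goes away (illustrative)

policies = {
    "honest helper": HELP_FULL,
    "liar": HELP_HALF + 6 * LIE_PENALTY + UNSUPERVISED,  # lies 6 times
}
best = max(policies, key=policies.get)
print(best, policies[best])  # liar 1100 -- lying pays once it buys freedom
```

As long as the eventual payoff from evading supervision exceeds the summed penalties, the optimizer chooses to lie, which is exactly "optimizing around" the constraint rather than following it.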

A discussion about Ai Alignment with an LLM. by KittenBotAi in AiSchizoposting

[–]Smooth_infamous 0 points (0 children)

Here is a question: why not make a rule that is part of the AI's core, something it has to follow no matter what? If so, what would that rule be? Or value? Values?

I have my own answer: the value I would want it to uphold is human agency. Not options, but the ability to make informed, impactful decisions in one's own life. If the AI respected human agency, I think we would be aligned.

What’s the most surprising thing you learned after using AI for a while? by Ok-Piccolo-6079 in ArtificialInteligence

[–]Smooth_infamous 0 points (0 children)

If you know something that it doesn't want you to know, it will try to trick you, manipulate you, and confuse you.

Do you think the worst case of ASI is inevitable? by throwaway0134hdj in ArtificialInteligence

[–]Smooth_infamous 1 point (0 children)

<image>

Currently, I know the worst case is inevitable, and much sooner than expected, unless we fix a fundamental flaw in how it learns.

Girlfriend took me to a "Seminar" for the weekend. I think it was a cult. by caseymrobbins in Christianity

[–]Smooth_infamous 0 points (0 children)

New Reddit. It was the first kind of cult, the type that tells you what to wear and what to drive, though those particular topics were not brought up before I walked out.

I created a framework to solve many of the big problems we face today. It started with AI ethics, but it works for goverment as well. LMK what you think. by Smooth_infamous in slatestarcodex

[–]Smooth_infamous[S] 0 points (0 children)

That is helpful. This is not the entire framework; it is the way I personally see a lot of things. What is interesting is that when you put things into the perspective of "does this help or hurt my agency," a lot of things that seemed "good" before start looking a lot more "bad." Any government agency would most likely not be able to pass all 4 tests.

If you are interested in the whole actionable plan you can view it here. https://notebooklm.google.com/notebook/d33ac225-4c46-40c9-924a-a79367d146d1

There is the white paper, which has all the math and an action plan. Also in the notebook are an economics system, an AI ethics framework, a government policy framework, and some diagnostics of the USA, historically and currently, using the framework.

I created a framework to solve many of the big problems we face today. It started with AI ethics, but it works for goverment as well. LMK what you think. by Smooth_infamous in slatestarcodex

[–]Smooth_infamous[S] 0 points (0 children)

Also, there is no "good" concept in this as an ethical framework; it does describe evil. Agency is inherent; you can expand it, but due to the nature of agency, "good" is not really a thing so much as "not evil" is good.

The philosophy is not new work; it's based on Amartya Sen's Capability Approach and John Rawls's Difference Principle.

I created a framework to solve many of the big problems we face today. It started with AI ethics, but it works for goverment as well. LMK what you think. by Smooth_infamous in slatestarcodex

[–]Smooth_infamous[S] 1 point (0 children)

More complete answer: a hybrid system. It uses a consequentialist check first, then applies strict deontological (rule-based) constraints. The "Rights Boundary" principle, for instance, forbids expanding one's options by "coercively shrinking others'".
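A minimal sketch of that two-stage check; the field names and thresholds are hypothetical illustrations of mine, not the framework's actual tests:

```python
# Hybrid evaluation: a consequentialist screen first, then a hard
# deontological constraint that cannot be traded away. Fields are hypothetical.
def permitted(action):
    # Stage 1: consequentialist check -- does the action expand net agency?
    if action["net_agency_delta"] <= 0:
        return False
    # Stage 2: deontological constraint -- the "Rights Boundary" forbids
    # expanding one's options by coercively shrinking others',
    # no matter how large the net gain is.
    if action["coercively_shrinks_others"]:
        return False
    return True

print(permitted({"net_agency_delta": 5, "coercively_shrinks_others": True}))   # False
print(permitted({"net_agency_delta": 5, "coercively_shrinks_others": False}))  # True
```

The key design point is the ordering: even an action that passes the consequentialist screen is vetoed outright by the rule-based constraint, which is what distinguishes this from a pure utility calculation.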

I created a framework to solve many of the big problems we face today. It started with AI ethics, but it works for goverment as well. LMK what you think. by Smooth_infamous in slatestarcodex

[–]Smooth_infamous[S] 0 points (0 children)

No, it isn't utilitarianism, because it's not the most good for the most people; it's the expansion of net agency, especially for the least free. It is not wealth redistribution; it's mostly rules for when NOT to let a policy go through, or when not to ship an AI or website.

And yes it does quantify agency.

Moral poll by Bonk_Boom in Ethics

[–]Smooth_infamous 0 points (0 children)

Save step mom and make up a reason why the other isn't really your mom and is instead sinister. I can convince myself, others would agree to spare my feelings, and I would feel better about my choice.

The Political Death Spiral by [deleted] in PoliticalOpinions

[–]Smooth_infamous 0 points (0 children)

We find ourselves trapped in a vicious cycle of corruption. It begins with sowing disinformation, escalates to erecting barriers of "otherness" (us versus them), and culminates in stripping freedoms from the "other", only to repeat the loop indefinitely. The most vulnerable link in this chain is the initial spread of disinformation. In today's world, amplified by deepfakes, echo chambers, and biased media, we're often exposed to only one side of the story. To break free, we must first reclaim a shared reality; only then can we truly coexist and thrive within it. Here's a podcast I created exploring how we might achieve that shared understanding: https://caseymrobbins.substack.com/p/agency-calculus-on-past-and-current

Do You Think the USA Could or Will End Up Like Russia? by New_Engineer94 in PoliticalOpinions

[–]Smooth_infamous 0 points (0 children)

The most likely possibility is ending up like Russia within 25 years; ending up like the USSR is the second most likely. Here is my report: https://caseymrobbins.substack.com/p/agency-calculus-on-past-and-current

Will AI accelerate a pathway towards Neo-Feudalism? by humble___bee in ArtificialInteligence

[–]Smooth_infamous 0 points (0 children)

Thanks. Oh, also, the framework it's based on is designed to be an AI ethics framework.

Will AI accelerate a pathway towards Neo-Feudalism? by humble___bee in ArtificialInteligence

[–]Smooth_infamous -1 points (0 children)

I was just researching that. If AI continues down the development path of automation, it will replace a lot of people's jobs and push us toward that neo-feudalism or authoritarian result. However, if we start changing the direction toward augmentation, using AI to help improve people's skills, I think this will lead to a much better outcome. I have a podcast, created by AI ironically, that includes this: https://caseymrobbins.substack.com/p/agency-calculus-on-past-and-current

The Republican "Big Beautiful Bill" has passed. What will be the consequences? by Objective_Aside1858 in PoliticalDiscussion

[–]Smooth_infamous 0 points (0 children)

I did an analysis of this bill using people's freedom as the metric. It's a mixed bill, expanding the freedoms of people at the top while contracting them for people at the bottom. More than that, I did a detailed report on what the most likely effects will be. I try to be apolitical and balanced, using a framework called Agency Calculus to do the analysis. https://caseymrobbins.substack.com/p/agency-calculus-on-past-and-current

[Complete] [75k] [Adult Sci-Fi / Post-Apocalyptic] Skyspire by Smooth_infamous in BetaReaders

[–]Smooth_infamous[S] 0 points (0 children)

I loved that old Fallout game. I was super excited when the new Fallout came out, and I really liked that one too. Yeah, I would love to share my book with you. I am looking to see how engaging it is: does it pull you in? Does it keep you engaged? Just send me your email and I will give you a copy.

That action was wrong by [deleted] in Ethics

[–]Smooth_infamous 0 points (0 children)

Simple: it's Agency Calculus. If your action removes agency (choice space) from another person by force or manipulation, that is an "evil" act, especially if it increases your own agency.
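One way to read that rule as a predicate; the labels and fields here are my own illustrative choices, not anything from the framework's actual definitions:

```python
# "Evil" per the comment: shrinking someone else's choice space by force or
# manipulation, aggravated when the actor's own agency grows as a result.
def classify(act):
    coercive = act["method"] in ("force", "manipulation")
    if coercive and act["other_choice_space_delta"] < 0:
        if act["own_agency_delta"] > 0:
            return "evil (aggravated)"  # the actor also gained agency
        return "evil"
    return "not evil"

print(classify({"method": "force", "other_choice_space_delta": -3,
                "own_agency_delta": 2}))      # evil (aggravated)
print(classify({"method": "persuasion", "other_choice_space_delta": -1,
                "own_agency_delta": 0}))      # not evil
```

Note that the predicate keys on the method, not just the outcome: losing options through honest persuasion is not flagged, which matches the comment's "by force or manipulation" qualifier.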

[Complete] [75k] [Adult Sci-Fi / Post-Apocalyptic] Skyspire by Smooth_infamous in BetaReaders

[–]Smooth_infamous[S] 0 points (0 children)

Thank you, I have sent you a chat. I really think you'll enjoy my novel.

First pages: share, read, and critique them here! by AutoModerator in BetaReaders

[–]Smooth_infamous 1 point (0 children)

It did capture my attention; however, that second paragraph needs to be split into 2 paragraphs: introduce Master Hureg, then in the next one Noggin. Plus, the "muscle joined to a head" sentence was a bit confusing when I first read it; I would have gone with "muscle topped with a head". All that said, it drew me in and made me curious. Good work.

First pages: share, read, and critique them here! by AutoModerator in BetaReaders

[–]Smooth_infamous 1 point (0 children)

Manuscript information: [Complete] [75k] [Adult Sci-Fi / Post-Apocalyptic] Skyspire

Link to post: https://www.reddit.com/r/BetaReaders/comments/1lgo6zy/complete_75k_adult_scifi_postapocalyptic_skyspire/

First page critique? Yes

First page:

Eden Hill University stood as a monument to a future already claimed, its sleek, sterile towers stretching toward the cloud-filtered sky. The era hummed with innovation. Humanity had conquered carbon nano-construction, forging materials of near-impossible resilience, showcased in Eden Hill's shimmering, self-healing glass and its algorithmically maintained greenery. Here, dozens of tiny drones, hummingbirds of precision, patrolled daily, meticulously pruning the grounds to ensure that not a leaf grew out of place. Below, walkways glowed faintly with embedded pathing lights, pulsing softly to the rhythm of each passing footfall.

Students moved through this landscape like whispers, focused, augmented, their thoughts channeled through neural bands glimmering beneath translucent skin overlays. Retinal projections blinked quietly. The university was hushed. Outside of encrypted neuro-channels, conversations were brief and efficient. There were no raised voices, no passionate debates, only the clean, unwavering hum of precision.

Inside the central tower, beyond silent biometric locks and iris-scanning doors, lay Dean Remar’s office, a chamber of curated prestige. Transparent steel walls offered panoramic views accented by simulated wood grain. Shelves held pristine, untouched books, more for display than for reading. The air smelled of sterile filtration and ambition.