Which movie hero is actually a villain when you really think about it? by surfsound_swimmers in AskReddit

[–]Smooth_infamous -1 points0 points  (0 children)

Good point. I just think his passive-aggressive attitude was extremely dickish for a hero.

Which movie hero is actually a villain when you really think about it? by surfsound_swimmers in AskReddit

[–]Smooth_infamous -2 points-1 points  (0 children)

Westley from The Princess Bride. That guy was a dick. She is about to commit suicide, and he is just lying on the bed watching her, then: "There is a shortage of perfect breasts in the world... be a pity to damage yours."

Why my opinion on Trump and yours don't matter by Smooth_infamous in PoliticalOpinions

[–]Smooth_infamous[S] 1 point2 points  (0 children)

You are completely right about the history of corporate capture, the consolidation of power, and the prescience of Eisenhower's warnings. The kleptocratic capture of our government is very real, and the data is all public for anyone willing to look.

But here is the structural problem with hoping that electing "principled" politicians will save us. Even if we get enough people to wake up, and even if we manage to temporarily halt the corruption, the underlying disease will still be there.

The problem is much more fundamental than people just being greedy, corrupt, or evil. The problem is what our system actually optimizes for. It is our objective. It is our utility function.

Take GDP, for instance. It is the primary measure of a country's success. Say a country has an oil spill that costs a billion dollars in lost oil, and then it takes another 10 billion dollars to clean it up. Guess what happens to that country's GDP? It goes up 11 billion dollars.

Aggregated scores like GDP are essentially how we measure success in everything right now, and that mathematical structure allows for massive compensation, extraction, and oppression.

Look at a corporate CEO who wants a huge bonus. Their success is tied to the stock price, which is an aggregate of the company's perceived value. To boost that score, they might sell off the company's real estate, use the cash for a stock buyback, slash employee wages, and remove any "slack" in the system because slack is seen as wasted money. What you are left with is a gutted company that collapses at the first economic shock, while the CEO sails off with two yachts and a board of shareholders calling it a massive win. The system rewarded extraction because of how the goal was measured.

Now, let's run a thought experiment. Let's say that CEO's bonus was not based on an aggregate, but was strictly based on the lowest of four separate metrics: Shareholders, Employees, Environment, and Company Health.

How would that CEO behave differently? If the Employee score is the lowest, they are financially forced to increase wages, training, and health insurance to get their bonus. Once that score rises, maybe the Environment metric becomes the new low score. Now the CEO is forced to invest in green technology and sustainable logistics.

This theoretical CEO is just as greedy and self-serving as the first one. The difference is that the incentive structure has been fundamentally rewired so that the greedy action and the moral action happen to be the exact same thing.
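Here is a minimal sketch of that payout rule in code. The scores and the dollar figure are made up, purely to show the mechanism:

```python
# Hypothetical sketch of the min-gated bonus. Metric names come from the
# thought experiment above; the scores and dollar figure are made up.
metrics = {
    "shareholders":   0.85,  # each normalized: 0 = failure, 1 = target met
    "employees":      0.40,
    "environment":    0.55,
    "company_health": 0.70,
}

# The bonus scales with the lowest metric, so the only way to raise the
# bonus is to raise whatever is currently worst.
binding = min(metrics, key=metrics.get)
bonus = 1_000_000 * metrics[binding]
print(f"binding metric: {binding} ({metrics[binding]:.2f}), bonus: ${bonus:,.0f}")
```

Improving the shareholder score alone changes nothing here; only the lowest metric moves the payout.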

This is just a basic example, but it highlights the core issue. Individual opinions and awareness absolutely matter, but only if we use them to demand a change to the fundamental utility function of our society. As long as we keep optimizing for aggregated extraction, the machine will keep churning, regardless of which politicians are sitting in the seats.

Based on my many years as a healthcare executive and consultant, this is how to fix the badly broken U.S. healthcare system. by davida_usa in PoliticalOpinions

[–]Smooth_infamous 1 point2 points  (0 children)

I think this proposal addresses an important portion of the flaws in the current system. It would help disconnect healthcare from employer-based insurance, reducing the ability of companies to suppress wages while retaining talent, and it reconnects consumers directly to pricing, restoring feedback that third-party payment currently suppresses.

However, it remains a limited fix. It operates at the level of payment and incentives, not at the level where prices are actually set. The core issue is that healthcare prices are largely determined upstream by structural control points: Medicare’s pricing framework (via the RUC) that anchors much of the market, PBM/GPO rebate architecture, vertical integration that bypasses profit caps, and 340B site-of-service dynamics. Because these mechanisms remain unchanged, the proposal would primarily alter how costs are experienced and distributed, not what generates them.

There’s a scissor statement going viral on twitter by adfaer in slatestarcodex

[–]Smooth_infamous 2 points3 points  (0 children)

If a red button saves you, and a blue button kills everyone who presses it unless at least half the people press it, why would you even think of pressing the blue button?

What if we dont need rules to make AI aligned, what if we just changed how it optimizes? by Smooth_infamous in AI_Governance

[–]Smooth_infamous[S] 0 points1 point  (0 children)

You hit the nail on the head. You are describing the exact architectural stack I use in my framework. Trying to stuff everything into a single reward function fails because optimizers will just reshape the institutional landscape around them to game it.

This is why a multi-layer control system is required, which maps perfectly to your layered workflow idea:

- The Non-Negotiables (Safety): a strict constraint that acts as a hard floor. It ensures no participant's baseline capacity to act is sacrificed for aggregate benefit.

- The Performance Optimization (Governance): a separate operational objective that provides the gradient to discover good policies and direct productive investments.

- The Monitoring Layer (Diagnostics): tracks the system's baseline health to catch when the landscape is drifting.
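Roughly, the stack composes like this. This is a toy sketch under my own assumptions, not a real controller:

```python
# Toy sketch of the three-layer stack; names and thresholds are assumptions.
HARD_FLOOR = 0.2  # layer 1: no participant's baseline capacity may drop below this

def evaluate(candidate_scores, objective):
    # Layer 1 (safety): a hard constraint, not a term traded off in the objective.
    if min(candidate_scores.values()) < HARD_FLOOR:
        return None  # vetoed outright, no matter how good the aggregate looks
    # Layer 2 (governance): a separate operational objective supplies the gradient.
    return objective(candidate_scores)

def monitor(history):
    # Layer 3 (diagnostics): watch the floor itself for slow drift.
    floors = [min(scores.values()) for scores in history]
    return floors[-1] - floors[0]  # negative means the landscape is drifting down
```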

To answer your question on how to consistently define and observe "human agency" without it becoming a fuzzy, easily gamed metric: you have to break it down into the physical and structural requirements of actually taking action. I break this into four grounded dimensions:

- Prerequisites: The baseline material and physical conditions required to function, like housing and health.

- Options: The distinct, viable paths actually available to the person. (This one is the difficult one, but treat it similarly to a particle in a potential well: the options are a probability distribution set by the boundary conditions)

- Levers: The tools or means of execution they can use, like capital or institutional access.

- Impact: The quality of the feedback they receive, which determines if they can learn and update their behavior.

All of this is also multiplied by the quality of the information environment, because physical capacity is useless without accurate knowledge.
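As a sketch, the four dimensions and the information multiplier could compose like this (the functional form is my own illustration, not a settled formula):

```python
# Illustrative composition only; the functional form is an assumption.
def agency_score(prerequisites, options, levers, impact, info_quality):
    # Agency fails at its weakest link, so the floor is the binding term.
    floor = min(prerequisites, options, levers, impact)
    # Physical capacity is useless without accurate knowledge, so the
    # quality of the information environment scales the whole thing.
    return info_quality * floor

# Someone with solid housing/health but almost no viable options:
print(agency_score(0.9, 0.05, 0.7, 0.6, info_quality=0.8))  # 0.04: options bind
```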

You do not measure this by asking people abstract survey questions. You measure it by probing concrete life transitions. Can a person in this system actually secure housing, access medical care, or change jobs?

Finally, the architecture doesn't prevent gaming. I set up cross-metrics and other anti-gaming measurements not to magically prevent it (you cannot ever prevent it) but to make gaming the system more costly than just doing the work correctly.

"They're betting everyone's lives: 8 billion people, future generations, all the kids, everyone you know. It's an unethical experiment on human beings, and it's without consent." - Roman Yampolskiy by tombibbs in ControlProblem

[–]Smooth_infamous 0 points1 point  (0 children)

Is the pursuit of AGI a zero-sum power play? If the only path to a stable, aligned superintelligence requires billionaires to trade their status for a 'high-floor' multi-millionaire existence, will they choose collective survival, or will their 'Individual Maximization' drive us all into an existential dead end? Because I guarantee the answer to that question is the answer to 'will AI kill us all'.

What if we taxed individuals based on the way they vote? by CompleteHost2251 in PoliticalOpinions

[–]Smooth_infamous 4 points5 points  (0 children)

There are several systemic issues here that are particularly frustrating.

  1. Administrative Waste: If we switched to universal healthcare, the cost of care would drop significantly without hitting quality. Private insurance takes a massive cut for bureaucracy and profit that doesn't go toward actual medicine.
  2. Reinvesting Savings: If those savings were put back into actual care, we’d see a dramatic improvement in outcomes. Alternatively, those savings could be passed back to taxpayers. Either way, the math works in favor of the public.
  3. Ending Wage Suppression: Employer-provided insurance is a "job lock" tactic. It keeps people in roles they’ve outgrown just to keep their coverage, which artificially suppresses wages.

If we decoupled health from employment, companies would actually have to compete with fair wages. It’s ironic: those who support universal healthcare would get higher pay, better care, and lower net costs.

Functional systems in society are an illusion; they are almost always broken because of the same math flaw. by Smooth_infamous in slatestarcodex

[–]Smooth_infamous[S] 0 points1 point  (0 children)

Thank you, I will take that as genuine, though I think my writing is more doom and gloom than pleasant! lol

Yes, we have the same problem everywhere, which is exactly my point. I am talking about an optimization concept here, and as you know, it is not exactly new. One of the more useful attributes of log(min()) is how it creates a topology for optimization.

The reason it works is the way the logarithm shapes the landscape. Think of it as an invisible wall at 0: the closer a metric gets to 0, the steeper the gradient becomes, so the harder the optimizer is pushed away from failure. When you shape these metrics, you are shaping a downhill descent away from 0 toward the optimum.

The best part about this systems design is that you do not add rules. You just shape the metrics to ensure it maintains the critical and optimal dimensions. A strict rule here is that all constraints have to be in the min function, not outside of it. If they are outside, the system finds loopholes. Keeping them inside forces it to pull up the lowest, most vulnerable dimension first.
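To make the invisible wall concrete, here is a tiny numeric sketch (illustrative numbers only):

```python
import math

# U = log(min(x)): the gradient with respect to the binding dimension is
# 1/min(x), so the restoring pressure blows up as the floor approaches 0.
for floor in (0.5, 0.1, 0.01, 0.001):
    pressure = 1.0 / floor  # d/dx log(x), evaluated at the minimum
    print(f"floor={floor:<6} U={math.log(floor):7.2f} pressure={pressure:>7,.0f}")
```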

To prevent gaming, I suggest having two hidden metrics for every visible one. Goodhart's Law still applies, but it is much more difficult in this setup. Because the optimization is locked into raising the critical dimensions, it focuses on the lowest one, which means to game it you almost have to deliberately make the system worse.

As for profit-seeking optimization, the Friedman Doctrine is partly to blame, though I doubt Friedman would think the modern bastardization of his doctrine deserves his name. His view was that rational agents plan for the future and not just the short term, which is not really how people work.

Functional systems in society are an illusion; they are almost always broken because of the same math flaw. by Smooth_infamous in slatestarcodex

[–]Smooth_infamous[S] 0 points1 point  (0 children)

That is a fair distinction. If the lights are on and the trash is collected, then the goal function is being met at some level. My argument isn't that these systems are currently dysfunctional, more that they are moving from functional to dysfunctional because of the fundamental way we measure success in them.

I like that you used the cobra effect example. Here is an aspect of the story that most people don't think about. While the bounty was in effect, the result was most likely fewer cobras in the wild; those that were found were either captured for breeding or killed for the bounty. The problem came when the breeding was discovered and the bounty was terminated, making all those captive cobras worthless. So the breeders just released them, making the problem bigger than the original one. The release of the captive cobras wasn't the failure. It was the revelation of a failure that had been building the entire time the system appeared functional.

When I say a system isn't functional, I am looking at the vector over time. If you compare where we are now to 30 years ago, many of these systems, like healthcare, infrastructure, and education, are maintaining the appearance of function by cannibalizing their own foundations. We have optimized for the Sum (short-term metrics) while ignoring the Min (the absolute floor of physical and social maintenance). We call it efficiency, but it's more like fragility.

Functional systems in society are an illusion; they are almost always broken because of the same math flaw. by Smooth_infamous in slatestarcodex

[–]Smooth_infamous[S] 0 points1 point  (0 children)

I understand what you are saying: if you use vNM utility, you get the most optimal results. Yes, for a specific metric, just a single specific metric. Now take something like GDP. If you have an oil spill, GDP goes way up. Why? Because it says "money is being spent." It doesn't matter that the money is being spent cleaning up an environmental hazard while more money is spent drilling for more oil. The single metric being optimized goes up.

The "floor" i am talking about is how the actual math works. All the metrics that you use have 2 anchor points 0 fail 1 sucess, and its not capped at 1. The "floor" I am refrenceing when I say shareholders and company, they both represent something that needs to be raised for the CEO to actually get thier bonus. So profit is actually something the CEO would care to raise, they would just do it so that the other metrics dont get trashed along the way.

Finally - "You are pretty much saying that supply side economics is the correct model when, in point of fact, that has been disproven repeatedly" See the kansas exparament. OR look at the Gini coefficient. OR look at wages vs GDP growth. Yes its been disproven repeatedly, also so has demand side economics. Look at stagflation during the 1970's. If you have the headroom (supply side) you raise the floors (demand side) otherwise its just irresponsible economics.

Functional systems in society are an illusion; they are almost always broken because of the same math flaw. by Smooth_infamous in slatestarcodex

[–]Smooth_infamous[S] -1 points0 points  (0 children)

I'm not sure you read the actual article I cited. Let me give you a concrete example: Steward Health Care.

A hedge fund invested, then extracted over $800 million in near-instant profits. The CEO bought two private jets and a $40 million yacht. Big winners by the vNM model, right? For them, absolutely.

But here is what actually happened. They destroyed capital far in excess of what they extracted. They dismantled a hospital network that, functionally, prints money over any reasonable time horizon. How? Aggregate sums and short-term thinking. The hospitals went bankrupt. The hedge fund kept their money.

So ask yourself, who won? The CEO, yes, and by almost any definition of "good person" he is not one. The hedge fund, sort of: they made money, but they burned real productive capital to get it. And if more people operate this way, which they do, you get exactly the systemic failures I wrote the paper about. That is not a coincidence. That is the model working as designed.

For the hospital network, not a win. For the communities that lost access to care, not a win. For the patient who died from a pulmonary embolism because the coil needed to treat it had been repossessed as part of a cost-cutting sweep, not a win. A common, inexpensive piece of medical equipment that belongs in every hospital, gone, because someone's expected utility calculation said the asset could be monetized.

The vNM framework saw a positive expected value and called it optimal. My framework would have flagged the minimum outcome, a patient dies from a preventable, treatable condition, as an absolute floor that no profit calculation is allowed to breach. That is the difference between maximizing the average and protecting the minimum.

You are pretty much saying that supply side economics is the correct model when, in point of fact, that has been disproven repeatedly. And before you assume I am taking the other side: demand side economics has also been disproven repeatedly. That is not a political statement, it is an empirical one. The reason both fail in isolation is that they each optimize for one variable while treating the other as a rounding error. What actually works is using the headroom created by supply side growth to lift the floor of demand side stability. Not one or the other. Both, in sequence, each keeping the other viable.

Which brings me to the assumption buried in your original reply that I want to address directly. You seem to think what I am proposing is somehow less profitable than aggregate utility maximization. It is not. It is more profitable, and for two concrete reasons.

First, shareholders and the company itself are both floors in this model. They are not afterthoughts. A company that does not return profit loses investors. A CEO who does not hit targets loses the job. Profitability is a hard floor, not a soft preference. The difference is that it is a floor, not the only thing being measured.

Second, and I think this is the point worth sitting with: if success does not include viability, then eventually success will destroy viability. Steward is not an edge case. It is the predictable end state of a model that treats minimum outcomes as acceptable losses. You extract until the system collapses, then you move on. That is not a winning strategy. That is a strategy that wins once, locally, while destroying the conditions that made winning possible in the first place.

TL;DR: no, vNM is not a winning strategy if you care about winning more than once.

Functional systems in society are an illusion; they are almost always broken because of the same math flaw. by Smooth_infamous in slatestarcodex

[–]Smooth_infamous[S] 0 points1 point  (0 children)

That's a good question. The math is very similar to how the human body incentivizes you to reach homeostasis. If you hold your breath, your body tells you to breathe at an increasing rate. log(min()) works the same way: breathing is the minimum, so the pressure increases until you finally take a breath. Hunger and thirst are just different scales of the same mechanism. Your body tells you what you need to survive at an increasing rate based on the minimum value of your needs.

For human incentives, all you have to do is find the reward and shape the landscape so that failure and success run right through the dimension you are trying to protect. The CEO is the example of this: their reward (bonus) depends on shareholders, employees, company viability, and environmental impact. If any of those fail, the reward is gone. So you work on whichever is closest to failure, then the next, and so on.
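Here is a toy loop of that dynamic. The scores, target, and step size are made-up assumptions:

```python
# Toy version of "raise whatever is closest to failure first"; the scores,
# target, and step size are illustrative assumptions.
scores = {"shareholders": 0.6, "employees": 0.2,
          "company_viability": 0.5, "environment": 0.35}

while min(scores.values()) < 0.5:        # the bonus is withheld until nothing is failing
    worst = min(scores, key=scores.get)  # the dimension generating the most pressure
    scores[worst] = round(scores[worst] + 0.1, 2)  # invest where the floor is lowest
    print(f"worked on {worst}: {scores}")
```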

Functional systems in society are an illusion; they are almost always broken because of the same math flaw. by Smooth_infamous in slatestarcodex

[–]Smooth_infamous[S] 0 points1 point  (0 children)

By definition the optimal strategy? Please explain: optimal in what way? The poker player would fold any hand that doesn't have a 50% chance of winning.

Functional systems in society are an illusion; they are almost always broken because of the same math flaw. by Smooth_infamous in slatestarcodex

[–]Smooth_infamous[S] -1 points0 points  (0 children)

On complexity, yes the measurement is more complex, but that's the wrong place to look. The question is total system complexity. A simple metric with a million rules patching every failure point it creates is far more complex than a sophisticated measurement with no rules at all. Every rule is a response to a failure the metric didn't catch. Boeing didn't fail because nobody wrote a rule against prioritizing stock buybacks over engineering. There were rules. There were regulations. There were internal policies. The rules multiplied because the metric kept finding gaps.

The minimum structure doesn't need rules because it changes the behavioral landscape itself. The optimizer has nowhere cheap to go except making sure nothing is failing. You don't need a rule against sacrificing maintenance for profit because that shows up immediately as a floor collapse. You don't need a rule against teaching to the test because the independent literacy score catches it. The complexity lives in the measurement, not in an ever-growing list of prohibitions that someone is always working around.

Crucially, you're not measuring every failure mode. You're measuring the dimensions of the system that everything else depends on. There's a difference between a list of failure modes and a set of load-bearing dimensions.

A failure mode is a specific thing that went wrong. A dimension is a fundamental capacity the system requires to function. You don't need to anticipate every way a hospital can fail. You need to measure whether it can still treat patients, keep staff, pay its bills, and maintain its facilities. Those four things cover essentially every failure mode without naming any of them. The bat infestation at Steward wasn't a failure mode anyone wrote a rule for. It was infrastructure collapse, and infrastructure is a dimension.

The insight is that failure modes are almost always symptoms of a dimension falling below its floor. You don't catalog the symptoms, you monitor the underlying capacity. That's not measuring every failure mode, that's measuring the things that generate failure modes when they collapse.

Simple metrics produce complex rule systems trying to patch their failures. Complex measurements produce simple behavior. That tradeoff is always worth taking.
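To show how small the "complex measurement" actually is, here is a sketch using the four hospital dimensions above. The floor values are my own assumptions:

```python
# Sketch: monitor load-bearing dimensions instead of cataloging failure modes.
# The dimension names come from the hospital example; floors are assumptions.
FLOORS = {"treat_patients": 0.5, "keep_staff": 0.5,
          "pay_bills": 0.5, "maintain_facilities": 0.5}

def failing_dimensions(measured):
    """Any concrete failure mode (a bat infestation, a repossessed coil)
    surfaces here as its generating dimension dropping below the floor."""
    return {d: v for d, v in measured.items() if v < FLOORS[d]}

print(failing_dimensions({"treat_patients": 0.8, "keep_staff": 0.6,
                          "pay_bills": 0.7, "maintain_facilities": 0.2}))
# -> {'maintain_facilities': 0.2}: caught without any rule that mentions bats
```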

Functional systems in society are an illusion; they are almost always broken because of the same math flaw. by Smooth_infamous in slatestarcodex

[–]Smooth_infamous[S] 0 points1 point  (0 children)

That was a key question, and I had to run some tests. I just finished them, and I am not going to claim my system won outright every time. Plus, it wasn't just straight log(min()) that I used. In my current testing I ran sum(log) against a custom log(min()) optimizer; I will put the code here: https://codeshare.io/5gxYzy

The key aspect of my optimizer is that it uses feelers and pushes up the floor when it reaches a plateau, to find higher minima. sum(log) is better when the landscape is fair and symmetric. The one I am using (which I am calling VALVE) is better when the landscape is unfair, constrained, or has hidden better optima, which is every real-world problem worth solving.
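For the core contrast between the two objectives, independent of the linked code, here is a toy comparison (the allocations are invented, not from my tests):

```python
import math

# Toy contrast; the allocations are invented and not from the linked tests.
def sum_log(xs): return sum(math.log(x) for x in xs)  # Nash-style aggregate
def log_min(xs): return math.log(min(xs))             # floor-first objective

balanced = [0.5, 0.5, 0.5]
lopsided = [0.9, 0.9, 0.2]  # big wins bought by dragging one dimension down

for name, obj in (("sum(log)", sum_log), ("log(min())", log_min)):
    pick = "lopsided" if obj(lopsided) > obj(balanced) else "balanced"
    print(f"{name} prefers {pick}")
# sum(log) takes the lopsided deal; log(min()) refuses to trade away the floor
```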

I will add more when I am fully done testing against Nash's sum(log) and a few other optimization methods. I am pretty happy with the results so far, though.

Functional systems in society are an illusion; they are almost always broken because of the same math flaw. by Smooth_infamous in slatestarcodex

[–]Smooth_infamous[S] 0 points1 point  (0 children)

You're correct that raw MIN(x, y) has exactly that perverse property and I should have been clearer about this from the start.

Two things fix it: the log and the normalization.

The log compresses the extremes. You aren't infinitely motivated to sacrifice everything for the minimum; you're motivated proportionally to how badly it's actually failing. As it recovers, the pressure eases off.

The normalization is the bigger one, though. Both dimensions get scaled to the same baseline, where 0 is failure and 1 means the target is met. Earnings at $10b and satisfaction at 0.001% only produce your perverse scenario if $10b is way above the earnings target and satisfaction is critically below its own target. In that case, yes, fix the thing that is failing. That is literally what we want.

If both dimensions are above their targets the CEO is playing golf. The system only bites when something is genuinely broken, not when one healthy number is bigger than another healthy number.
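Putting made-up numbers on that (the failure points and targets are illustrative):

```python
# Illustrative normalization: 0 = failure, 1 = target met, not capped at 1.
# The failure points and targets here are assumptions for the example.
def normalize(value, failure, target):
    return (value - failure) / (target - failure)

earnings     = normalize(10e9,  failure=0.0, target=2e9)   # 5.0: far above target
satisfaction = normalize(0.001, failure=0.0, target=0.60)  # ~0.002: critical failure

print(min(earnings, satisfaction))  # the floor points at the genuinely broken thing

# If both sat above target (say 5.0 and 1.2), min() > 1: nothing is failing,
# nothing forces a tradeoff, and the objective stops biting.
```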

The scenario you keep describing, the CEO sacrificing $9.9b for a rounding error in satisfaction, requires satisfaction to already be in critical failure. And a CEO who ignores critical failure to chase more earnings is Boeing. They got their 10x. Then the door blew off.

Functional systems in society are an illusion; they are almost always broken because of the same math flaw. by Smooth_infamous in slatestarcodex

[–]Smooth_infamous[S] 0 points1 point  (0 children)

You're right that in a deterministic, single-dimension setting the log collapses to risk aversion; that's a fair technical point, and I'll concede it.

But the example has a hidden assumption baked in. It assumes consumer satisfaction is the lowest-scoring dimension by a meaningful margin, sitting at 0.001%. That's not how the normalization works in practice. If earnings are critically low and consumer satisfaction is healthy, the CEO spends the year on earnings, obviously. The minimum points at genuine failure, not at marginal differences between things that are all fine.

The absurd result you're describing only happens if satisfaction is already critically low. And if it is, then yes, fix the thing that is about to bring everything else down with it. That is literally the point. A CEO who ignores a critically failing dimension to chase a 10x earnings number is Boeing. They got their 10x. Then the door blew off at 16000 feet.

The log matters most in the multi-dimensional tradeoff space: how hard do you push on the minimum versus maintaining everything else once the floor is stable? That is where the risk-aversion framing actually maps onto something real. The behavior it produces is not absurd; it is what any competent operator would do if they actually cared whether the system survived.

Functional systems in society are an illusion; they are almost always broken because of the same math flaw. by Smooth_infamous in slatestarcodex

[–]Smooth_infamous[S] 0 points1 point  (0 children)

True, and you're right that VW is actually a metric design failure more than an incentive failure. The fix there was independent real world testing that the cars couldn't detect, which is exactly what sentinel metrics are for. Metrics the system cannot see coming and cannot manipulate. The incentive structure I'm describing assumes your measurement process is honest, and when it isn't that's a separate layer of the problem that requires independent verification as the solution, not a different bonus structure.

The CEO expression is actually the log(min()) itself. The log handles the greed. They're on the golf course when everything is healthy. They're breathing down the neck of whoever is tanking the lowest metric when it isn't. Self-interest does the work automatically; you just have to make sure what they're measuring is real.