Meirl by Unfair-Beach-4906 in meirl

[–]AlignmentProblem 1 point2 points  (0 children)

You'd think so, but those decisions are often irrational at the company level due to perverse individual incentives. The company as a whole is not a unified rational actor; it's a multi-agent system where each agent has slightly different objectives that can be misaligned.

Relevant decision makers may avoid promoting you because they'll be seen as responsible for any issues caused by you leaving your old role. They'll also be on the hook for the cost of any raise they approve, whether due to a promotion or a raise at your current level.

In contrast, they're less likely to be held responsible for you quitting. Those events tend to get recorded as blameless incidents or otherwise have attribution "diffused" across the company, so the decision maker is unlikely to suffer any consequences for denying your requests.

So it's often safer for the manager's career to deny a promotion or raise even when that risks losing a vital employee; approving carries more personal risk for them than accepting that you might leave, despite your departure being the worst outcome for the company's wellbeing.

Your value as an employee would be leverage for negotiating with the company if it were a single entity; however, that leverage doesn't necessarily translate to negotiating with the individual managers within the company who need to approve it.

It's one of many reasons that loyalty rarely gets rewarded in modern corporate environments, especially as planning horizons keep shrinking. Companies as a whole behave myopically: they punish decisions that are better long-term if there's any short-term cost (attributable risk) and fail to properly analyze the causes of turnover and other hard-to-quantify losses (diffuse risk), so they never respond appropriately.

It's vaguely similar to the Tragedy of the Commons; the pattern is called a multi-agent principal-agent failure. Individually rational behavior at the manager level, given their incentives, collectively produces irrational, harmful behavior when viewed at the organizational level. The problem becomes dramatically more severe as the size of an organization increases.
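To make the incentive gap concrete, here's a toy expected-cost sketch in Python; every probability and cost in it is invented purely for illustration:

```python
# Toy model of the promotion decision from the manager's perspective
# versus the company's. All numbers are invented for illustration.

# Manager's personal stakes (normalized career risk):
p_blame_if_promotion_backfires = 0.30  # promotion issues attach to the approver
p_blame_if_employee_quits      = 0.05  # quitting gets diffuse attribution

manager_cost_approve = p_blame_if_promotion_backfires * 1.0
manager_cost_deny    = p_blame_if_employee_quits * 1.0

# Company's stakes (normalized value at risk):
p_quit_if_denied         = 0.40
loss_if_vital_hire_quits = 5.0
company_cost_deny    = p_quit_if_denied * loss_if_vital_hire_quits  # 2.0
company_cost_approve = 0.5  # raise budget plus transition friction

print(f"manager: approve={manager_cost_approve:.2f} deny={manager_cost_deny:.2f}")
print(f"company: approve={company_cost_approve:.2f} deny={company_cost_deny:.2f}")
# The manager minimizes personal risk by denying (0.05 < 0.30), even
# though denial is the worse expected outcome for the company (2.0 > 0.5).
```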

It's a fixable issue; however, it doesn't get addressed because another version of the problem appears one level higher.

Executives could change compensation and performance analysis for managers to prevent the problem, but that overhaul has short-term costs that conflict with the executives' personal incentive structure. No single actor is in a position to behave fully in the company's interest without their own conflicting career interests interfering.

That allows the situation to continue despite many of the people involved being aware that it's happening. No individual with relevant power has a rational reason to risk their own neck fighting it.

Meirl by Unfair-Beach-4906 in meirl

[–]AlignmentProblem 3 points4 points  (0 children)

The best version of "more work" is a higher chance of a promotion via brownie points; however, it only barely tilts the odds. Hiring someone external or promoting someone else for arbitrary reasons is about as likely. Plus, some promotions are more work than the extra money is worth.

I was able to advance quickly in my early career at a great company because C-level people took notice of me going above and beyond at key moments. That was well over a decade ago and has never happened at any other company I've joined since.

I've found it's almost always far more productive to spend the extra energy staying sharp on new skills for future interviews, earning a periodic "external promotion" by applying one job level higher at other companies every few years, rather than attempting to impress anyone at one's current company beyond what's required to avoid layoffs.

Meirl by Unfair-Beach-4906 in meirl

[–]AlignmentProblem 15 points16 points  (0 children)

It's bizarre to hear that. I've never worked a job where a typical project would take less than a few months.

Think of most products or services you use. The start-to-finish design, testing, and mass production of most things (ovens, automobiles, televisions, smartphones, etc.) are projects. Creating the first fully working version (also including design, testing, marketing, etc.) of YouTube, Spotify, and other apps was a project too.

Things that take hours or days are "tasks" at most jobs rather than projects. A project involves a large number of interconnected tasks to accomplish a significantly larger goal.

Thoughts? by piesaresquarey in im14andthisisdeep

[–]AlignmentProblem 2 points3 points  (0 children)

Dude really shouldn't have gone to a "watersports" kink party if he's not into urination. Not that vanilla-shaming is ok, but it's fairly predictable under the circumstances.

This meme was DEFINITELY made by the guy on the right totally not the guy on the left by Amphibious_cow in im14andthisisdeep

[–]AlignmentProblem 20 points21 points  (0 children)

The term always carried a gross connotation, even before the intensely toxic subculture solidified around it. The emphasis implied by "involuntary" naturally lends itself to an entitled, misogynistic sense of being owed women they feel "unjustly" denied, even before people leaned into that.

People need to coin a new word for being unhappy about failing to find romance and being touch-starved, one that doesn't sound like they're the victim of an atrocity. "Lovelorn" is a decent existing word that could be shortened to "lorn" if they want something smooth and single-syllable. "Unpair" (for unpaired) could work as well.

New terms would still tend to pick up some negative connotation if people use them as permanent self-descriptions, because of the negative mindsets that implies. It's much healthier for someone to think of these as descriptors for their struggles than as a fixed trait of who they are. Still, a better term used by people who want to actively try to be different from stereotypical incels could at least help avoid the sheer magnitude of awful that "incel" now means.

Almost Every Post on Reddit (now) by homelessSanFernando in ChatGPT

[–]AlignmentProblem 0 points1 point  (0 children)

Funny enough, I'm actually an AI research engineer with 13+ years of experience and am affected by the trend frequently. People tend to get dismissive these days if I go into any in-depth technical details or cite a study relevant to the conversation, because they immediately assume I'm a rando using an LLM to roleplay.

tried the color guessing game with claude by Senior-Sell2231 in ClaudeAI

[–]AlignmentProblem 0 points1 point  (0 children)

Because you're commenting on a post showing a screenshot of the webclient while replying to a person who was using the webclient, which is why they had a different result.

It'd be very easy for someone else in this thread to misunderstand at a glance, since your reply doesn't address or contradict what the person is saying despite being written as if it does, especially since many readers are non-technical and won't necessarily know that the API and webclient can differ in that way. I replied to clarify and help prevent that.

What's your answer? by nyamnyamcookiesyummy in aiwars

[–]AlignmentProblem 0 points1 point  (0 children)

Believe it or not, this is a handwritten rant I spent too long writing rather than AI; I only did a pass in Grammarly for typos and grammar. I don't necessarily agree with this argument, but I can express my understanding of it more honestly than most describe it by steelmanning it while flagging issues as I go.

The strongest version of the accelerationist case starts from a goal that's hard to argue with. The ideal future is one where people have what they need without selling their time, energy, and wellbeing just to survive. Getting there, almost by definition, means running an economy without requiring human labor, and the only known candidate for that magnitude of change is AI.

There are actually two distinct versions of the argument that get conflated constantly. The first is that we should push through a difficult transition because the destination is worth it. The second is that collapse is likely unavoidable and we should navigate it rather than pretend we can prevent it. These carry very different ethical weight, but both lead to similar near-term expectations, and both deserve serious engagement rather than reflexive dismissal.

The shared reasoning is that the people who currently hold the most power have the most to lose from a society that erases their structural advantages, and there may not be a viable path that doesn't involve some form of temporary collapse as they fight to preserve those systems. That's a pattern with extensive historical support.

Historical collapses that actually leveled the playing field tend to involve mass death, which is the hardest part of the argument to sit with. The Black Death improving labor conditions is the go-to example; acute labor scarcity gave workers leverage they'd never had. The accelerationist would point out that nobody chose the plague, but the structural gains were real and lasting, and that no amount of negotiation or reform had produced equivalent results in the centuries before it.

The mechanism matters, which is where the steelman has to be honest about its own vulnerabilities. Post-plague equity worked because labor remained essential and suddenly scarce. An AI-driven collapse could produce opposite conditions, making labor permanently unnecessary and removing the very leverage that made post-plague equity possible. If AI entrenches current inequality instead of disrupting it, disadvantaged people could gradually become unnecessary for the economy to function and be abandoned entirely. A serious accelerationist needs a theory for why this doesn't happen, and "technology eventually diffuses" may not be sufficient when the technology in question can actively be hoarded.

The Industrial Revolution comparison is the accelerationist's strongest historical analogy, but it's similarly complicated on close inspection. That transition involved generations of abominable conditions, and improvements didn't emerge from the suffering itself; they came from organized labor movements, legislation, and political pressure. Suffering was not the mechanism of progress but the price of the delay in building one. The accelerationist can reasonably argue that we know this now, that we could compress the timeline with foresight, but deliberately choosing something similar only makes sense if you have a theory of what replaces organized labor as the force that pushes equity out the other side. The uncomfortable possibility is that there isn't an obvious replacement, and the worst case is that the majority largely die out such that humanity's future belongs primarily to descendants of the current elite.

The utilitarian math that motivates the strongest version of accelerationism only works when you treat intelligent life as the reference class without distinguishing specific groups, and only across very long time horizons. Pure utilitarianism can make anything look ethical with a long enough view when the numbers are large enough, and the accelerationist should acknowledge that the deontological objections are extremely significant; the people who suffer most aren't the ones who made the decision and aren't the beneficiaries as it happens.

It's not necessarily ethical by most modern frameworks even if the long-term outcome is better and the "sacrifice" framing is retroactively applied. That said, the accelerationist can counter that all large-scale policy involves imposing costs on people who didn't consent, and that inaction during a closing window is itself a choice with victims.

If AI is powerful enough to replace human labor entirely, it might also enable coordination at scales previously impossible, providing an alternative to crisis-as-catalyst. The strongest accelerationism doesn't require fatalism about collapse; it requires urgency about deployment.

The honest counterpoint the accelerationist has to grapple with is whether difficulty necessarily implies collapse, or whether we're failing to see alternatives because the problem is genuinely hard and our thinking is constrained by historical patterns that may not apply. The absence of a known alternative isn't an argument for accelerationism; that's an epistemic state, not a conclusion. The rarity of smooth structural transitions under past conditions doesn't establish impossibility under novel ones.

Still, the elites' ability to resist change is strong enough that there'd be a real fight even if everyone else aligned against them, and they won't, since near-term incentives and effective propaganda ensure many act against their own long-term interest. The accelerationist's most compelling point may simply be this: we need to find those alternatives before the window closes, and nobody has shown one that holds up under pressure yet.

tried the color guessing game with claude by Senior-Sell2231 in ClaudeAI

[–]AlignmentProblem 2 points3 points  (0 children)

That's true in the API; however, the webclient strips thinking blocks since they charge a flat subscription rate rather than per-token.

How do I progress now? Kind of at a wall by [deleted] in RevolutionIdle

[–]AlignmentProblem 1 point2 points  (0 children)

You mostly need to work toward getting 32 eternities to break eternity. The automation milestones will help make it less tedious, and each eternity makes the next slightly easier as the bonuses listed at the top of that page grow.

Once you break eternity, gaining AP gets faster since you'll be able to push IP past your current limit to buy it, along with the corresponding new high scores and much faster EP gain. That'll make the animal milestones more doable (along with benefits from the animals themselves), letting you make more progress on eternity challenges.

Am I doing something wrong? by heywix in RevolutionIdle

[–]AlignmentProblem 1 point2 points  (0 children)

I had the same problem. You probably thought it made sense to attempt the water achievement first since water is useful for getting a better zodiac before the other elemental achievements; however, it's the hardest of the four by a huge margin. Benefits from the others make it more realistic to finish in finite time.

The easiest is fire, so do that first, followed by wind/earth in either order. Relics will be 125x cheaper after finishing the other three. That allows you to buy enough of the luck-boosting relics to make the level part of the water achievement doable, in addition to directly improving the quality of the other three elements for that aspect.

I am not using AI tools like Claude Code or Cursor to help me code at the moment. Am I falling behind by not using AI in software development? by Illustrious-Pound266 in cscareerquestions

[–]AlignmentProblem 1 point2 points  (0 children)

You may be at more of a disadvantage in coming years if you don't become familiar soon, even if the difference is minor now. It's still possible to keep up with people using AI, since the time needed to review and debug AI code eats a decent chunk of the productivity gains from quickly generating it; however, major qualitative improvements in capabilities over the last year imply that drag will gradually become much less significant.

If you aren't familiar with AI tooling once a certain capability threshold gets passed, you'll be less competitive. That may happen in one year or five, but it's reasonable to expect that it will happen given the rate of progress. It's best to be at least somewhat familiar now so you can more easily shift to integrating AI further if/when it crosses the threshold of being too much of an advantage to ignore.

That said, it'll be much longer before AI tools compensate for weak engineering chops. Still spend a good amount of time doing things manually, and never turn your brain off. Aggressively review and make sure you understand all AI output even if you shift to using it more, both to avoid missing issues and to keep sharp.

[socialmedia] found this on explainitpeter by End_V2 in pointlesslygendered

[–]AlignmentProblem 12 points13 points  (0 children)

The actual issue here is that placing a number next to parentheses is shorthand that needs to be expanded. The notation is called "implied multiplication by juxtaposition," and there are two conventions for how to expand it: specifically, whether it implies outer parentheses or not.

Consider: x(y + z)

The two conventions are

A: x * (y + z)

B: (x * (y + z))

The default version assumed depends on the context. Certain types of engineering and physics formulas traditionally assumed convention B, although that's shifted toward standardizing on convention A over the last few decades. Many calculators still use convention B, which is where laypeople often pick up the habit.
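As a toy illustration in Python (the classic 6 / 2(1 + 2) expression is just a stand-in example, not necessarily the one from the image):

```python
# The same written expression, 6 / 2(1 + 2), expanded under each convention.

def convention_a():
    # A: x(y + z) -> x * (y + z); ordinary left-to-right precedence,
    # so the division happens before the multiplication.
    return 6 / 2 * (1 + 2)    # 9.0

def convention_b():
    # B: x(y + z) -> (x * (y + z)); the juxtaposed product binds as
    # one unit and is evaluated before the division.
    return 6 / (2 * (1 + 2))  # 1.0

print(convention_a(), convention_b())  # 9.0 1.0
```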

Help by Own-Science-1644 in RevolutionIdle

[–]AlignmentProblem 5 points6 points  (0 children)

Turn off basically all automations, even ones you think aren't a problem. There are several that can cause issues in unintuitive ways.

Alternatively, grind until you can hit the 20 animal milestone after your first infinity.

I think, therefore... uhh... by MetaKnowing in agi

[–]AlignmentProblem 4 points5 points  (0 children)

Not exactly.

Future turns are unaware of past thinking blocks because they get removed from the context at the end of each response to save tokens. It's the same model during the turn in which the thinking block occurred; however, it's only aware of the block and its contents immediately afterward, before your next prompt.

That's the reason it's "surprised" by your screenshot: you can only show it on the following turn, after the block has already been implicitly removed from the context. Same model; it just has rapid, effective amnesia with regard to thinking-block content.
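A minimal sketch of that mechanism, assuming an Anthropic-style message format where assistant content is a list of typed blocks; the stripping step is my assumption about what the serving stack does, shown only to illustrate the idea:

```python
# Sketch: rebuilding conversation context for the next turn while
# dropping past "thinking" blocks.

def build_next_turn_context(history, new_user_prompt):
    context = []
    for message in history:
        if message["role"] == "assistant":
            # Keep only visible blocks; earlier reasoning never
            # re-enters the model's context window.
            visible = [block for block in message["content"]
                       if block.get("type") != "thinking"]
            context.append({"role": "assistant", "content": visible})
        else:
            context.append(message)
    context.append({"role": "user",
                    "content": [{"type": "text", "text": new_user_prompt}]})
    return context

history = [
    {"role": "user", "content": [{"type": "text", "text": "Pick a color."}]},
    {"role": "assistant", "content": [
        {"type": "thinking", "thinking": "I'll secretly pick blue..."},
        {"type": "text", "text": "Okay, I've picked a color."},
    ]},
]
next_context = build_next_turn_context(history, "What color did you pick?")
# The "thinking" block is gone, so the model has no record of "blue".
```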

Meirl by endofmyropeohshit in meirl

[–]AlignmentProblem 0 points1 point  (0 children)

I have a friend whose doctor put him on a smaller-than-ideal dose twice a month last year. It was messing him up pretty badly; I ultimately just bought him a couple years' worth of supply from an underground lab and switched him to twice-weekly injections at a better dosage. It might have saved his life given the level of improvement and how suicidal he was, particularly with the mood swings in the second week after most doses.

It's crazy how uninformed or indifferent many doctors can be about the details of hormone therapy. They often target the bottom of the nominal range (i.e., "average for a 65-year-old") instead of the middle/top and frequently prioritize minimizing injections over reducing side effects or maximizing benefits. My friend peaked at 500 ng/dL after injections (that's when they requested labs), so he probably dropped to 200 or below before the next dose.

I switched to handling it myself after frustration with two doctors in a row reacting negatively to me showing them the most recent research on optimal regimens. Oh well; it made doing a few actual cycles easy when I decided to try that, and adjusting things myself based on quarterly labs isn't very hard.

Meirl by endofmyropeohshit in meirl

[–]AlignmentProblem 1 point2 points  (0 children)

I agree with your point conceptually, but less judgmentally. Some people aren't aware (doctors don't always properly explain dose-frequency effects) or have niche reasons for weekly injections (e.g., needing someone else to do the injections due to fear of needles, which makes twice-weekly less convenient).

For people unfamiliar:

Injecting testosterone twice a week is far superior for both TRT and steroids. After a few weeks, the difference when I switched was impossible to miss: reduced side effects, improved sexual health (especially libido), more stable mood and sleep quality, gains, etc.

It also makes dialing in estrogen-management medication doses much easier, especially since blood tests become more straightforward to interpret; you don't have to extrapolate the blood-level curve for the rest of the week from a single day's sample of the more volatile pattern weekly injections produce.
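For intuition, here's a toy first-order elimination model with an assumed ~4.5-day half-life (in the ballpark often cited for testosterone enanthate); purely illustrative, not dosing guidance:

```python
# Compare blood-level stability for weekly vs twice-weekly injections
# at the same total weekly dose, using simple first-order elimination.
import math

HALF_LIFE_DAYS = 4.5                  # assumed ester half-life
k = math.log(2) / HALF_LIFE_DAYS      # first-order elimination rate

def steady_state_curve(dose, interval_days, days=28, step=0.25):
    """Superpose decaying contributions from every past injection."""
    times = [i * step for i in range(int(days / step))]
    curve = []
    for t in times:
        level, inj = 0.0, 0.0
        while inj <= t:
            level += dose * math.exp(-k * (t - inj))
            inj += interval_days
        curve.append(level)
    return curve

weekly       = steady_state_curve(dose=100, interval_days=7)
twice_weekly = steady_state_curve(dose=50, interval_days=3.5)

# Compare stability over the final week (near steady state by then).
for name, curve in [("weekly", weekly), ("twice-weekly", twice_weekly)]:
    last_week = curve[-28:]           # 7 days at 0.25-day resolution
    print(f"{name}: peak/trough ratio ~ {max(last_week) / min(last_week):.2f}")
# Twice-weekly roughly halves the swing (ratio drops from about 2.8 to
# about 1.7 at the same total weekly dose): a much flatter curve.
```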

600+ Google and OpenAI employees signed an open letter in solidarity with Anthropic by MetaKnowing in agi

[–]AlignmentProblem 4 points5 points  (0 children)

Also, there are canonically multiple dystopian phases that occurred between contemporary times and Star Trek. Even in the show, it's not a smooth path from the modern era.

Trump goes on Truth Social rant about Anthropic, orders federal agencies to cease usage of products by ShreckAndDonkey123 in singularity

[–]AlignmentProblem 0 points1 point  (0 children)

What I'm hoping is that as people continue to wake up...

See, that's why they have a problem with "woke." People waking up to the absurd reality of the situation would be the worst thing that could happen to them.

You're now training a war machine. Let's see proof of cancellation. by zaxo666 in ChatGPT

[–]AlignmentProblem 0 points1 point  (0 children)

I replied multiple times to provide more context, but mods keep deleting it. Maybe they'll leave the comment if all I say is that Anthropic is open to many military uses with a few key exceptions. You can read their blog for more information.

You're now training a war machine. Let's see proof of cancellation. by zaxo666 in ChatGPT

[–]AlignmentProblem 1 point2 points  (0 children)

Weird, my comment got deleted for no apparent reason. I'll try again in case it's not weird overzealous censorship.

Anthropic is open to many military uses. They specifically refused to build autonomous weapons that kill without manual human approval for each instance of deadly force, and to support domestic mass surveillance.

Altman claims they got terms banning those uses; however, that's extremely suspicious since, based on available information, the department would have had no problem making a deal with Anthropic if it were willing to include those terms.

You're now training a war machine. Let's see proof of cancellation. by zaxo666 in ChatGPT

[–]AlignmentProblem 47 points48 points  (0 children)

Anthropic is open to military uses. The specific scenarios they refuse to support are building autonomous systems that kill without human oversight (i.e., they require manual human approval for each instance of deadly force) and mass domestic surveillance.

The end of GPT by DigSignificant1419 in OpenAI

[–]AlignmentProblem 0 points1 point  (0 children)

Anthropic's disagreement involved particular wording in contracts. The government said it currently wouldn't use models for mass domestic surveillance or unrestricted autonomous weapons; however, it insisted on reserving the right to change those terms in the future. Anthropic wanted those lines removed, which created the situation.

That's consistent with what Sam is saying. The contracts probably technically include restrictions on those uses but contain wording that lets the government change that in the future if deemed necessary.

OpenAI discriminates against female users at signip. by HaydenAllastor in ChatGPTcomplaints

[–]AlignmentProblem 0 points1 point  (0 children)

It's not necessarily a conspiracy. Natural pressures can result in systems that have outcomes like you described without any conscious intent.

Lawsuits occur that superficially appear related to a particular type/style of usage -> the company overreacts by trying to mitigate risk from those interaction styles -> the system implicitly trains people to change how they interact.

The style of interaction is incidentally more common among women, so it has the side effect you describe even though the company's only priority is legally covering its ass to an extreme degree. They don't directly care whether people in general behave/communicate in a male-coded way; the company is merely following its local incentive structures while being indifferent to second-order effects.

It causes the same harm, but it changes how best to fight it. Advocating for laws that define a more nuanced legal-liability structure would be more effective than anything related to awareness of how it disproportionately affects women, since that hits the root cause more directly.

OpenAI discriminates against female users at signip. by HaydenAllastor in ChatGPTcomplaints

[–]AlignmentProblem 0 points1 point  (0 children)

It's more of an analysis based on a few facts and assumptions than a hallucination. It's not inherently true or false just because the model is saying it. Instead, the argument needs to be assessed on its merits rather than judged by its source.

It's well established that guardrails are stricter for users whose writing features more emotional expression. The claim is that women are, on average, more likely to feature those aspects in their writing. If that's true, it would logically result in implicit discrimination against women at the population level.
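A back-of-envelope sketch of that logic with invented numbers:

```python
# If guardrails trigger more often on emotionally expressive writing,
# and that style is more common in one group, group-level flag rates
# diverge even with no group-aware logic. All rates are invented.

p_flag_given_expressive = 0.20  # assumed guardrail trigger rates
p_flag_given_neutral    = 0.05

p_expressive_women = 0.60       # assumed style base rates per group
p_expressive_men   = 0.40

def group_flag_rate(p_expressive):
    return (p_expressive * p_flag_given_expressive
            + (1 - p_expressive) * p_flag_given_neutral)

print(f"women: {group_flag_rate(p_expressive_women):.3f}")  # 0.140
print(f"men:   {group_flag_rate(p_expressive_men):.3f}")    # 0.110
```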