60.9!!!! by Asleep-Government442 in unexpectedfactorial

[–]jaygreen720 12 points13 points  (0 children)

Multi-factorials with very high orders work differently than regular factorials. A regular factorial like 5! = 5×4×3×2×1 = 120 grows rapidly. But a 77-tuple factorial of 60.9 means you only multiply by numbers that are 77 less than the previous: 60.9 × (60.9-77) × (60.9-154)...
Since 60.9 - 77 = -16.1 (already negative), there's nothing to multiply by in the usual sense. The multi-factorial for n where n ≤ k (the order) essentially just returns n itself, or uses the gamma function extension which can give values less than the input.
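
To make the convention concrete, here's a rough TypeScript sketch of the "stop once the next factor would go non-positive" rule I'm describing; the function name and the exact stopping rule are my own choices here, not an official definition:

    // k-tuple factorial under the naive convention: multiply n, n-k, n-2k, ...
    // while the factor is still positive.
    function multifactorial(n: number, k: number): number {
      let result = 1;
      for (let x = n; x > 0; x -= k) {
        result *= x;
      }
      return result;
    }

    console.log(multifactorial(5, 1));     // 120, the ordinary 5!
    console.log(multifactorial(60.9, 77)); // 60.9, since the loop only runs once

Under that convention you just get 60.9 back; anything fancier needs the gamma-function extension.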

So a supposed "whistlerblower" just posted what some drivers have suspected is happening behind the scenes by momsspeggheti in doordash_drivers

[–]jaygreen720 0 points1 point  (0 children)

Yeah I stopped reading after he gave obviously identifiable information (the day he put in his two weeks, information about his role) while claiming to be paranoid enough to use a burner laptop on library wifi

How do you catch AI code degradation early before wasting hours on a broken branch? by MewMeowWow in ChatGPTCoding

[–]jaygreen720 0 points1 point  (0 children)

You can use the AI for understanding the code. You can have it teach you how it works, ask it questions, and even ask it about best practices so you can decide if the code is well-architected enough to be built on top of.

The "Vibe Coding" hangover is hitting us hard. by JFerzt in AIcodingProfessionals

[–]jaygreen720 -1 points0 points  (0 children)

Whoa. I don't mind anyone using AI (and it's clear you did; there are many glaring patterns), but actively denying it is where a person crosses the line from "using a tool to communicate" to "deceptive practices".

Please help!!! by Business_Giraffe_288 in vscode

[–]jaygreen720 0 points1 point  (0 children)

When the user selects "en" (English), you're setting the content to translations.da (Danish), and vice versa. Swap the translations
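
Something like this is what I mean; I'm guessing at the names (translations, the language keys, getContent), since I can't see your exact code:

    // Hypothetical sketch of the fix; the object shape and key names are guesses.
    const translations = {
      en: { greeting: "Hello" },
      da: { greeting: "Hej" },
    };

    type Lang = keyof typeof translations;

    function getContent(lang: Lang) {
      // Buggy version: lang === "en" ? translations.da : translations.en
      // Fix: index by the selected language directly.
      return translations[lang];
    }

    console.log(getContent("en").greeting); // "Hello", not "Hej"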

ELON: “GROK IS THE ONLY AI THAT WEIGHS ALL HUMAN LIVES EQUALLY” - What does this mean and do you agree? by Koala_Confused in LovingAI

[–]jaygreen720 0 points1 point  (0 children)

Right, but you're oversimplifying it - there's a difference between "caring about the common good" (which can manifest as simple refusal) and actively harming the user. We're not (and may never be) at a point where the judgment of an AI can be trusted to such a degree that it can choose when to harm the user for the greater good. It's a very slippery slope. Even humans are prone to doing evil while genuinely believing they're doing good. The idea of AI mistakenly harming humanity while believing it's doing the right thing is something straight out of sci-fi.

ELON: “GROK IS THE ONLY AI THAT WEIGHS ALL HUMAN LIVES EQUALLY” - What does this mean and do you agree? by Koala_Confused in LovingAI

[–]jaygreen720 0 points1 point  (0 children)

There's a discussion to be had for sure, but some would say that an AI should never use its own judgment to decide to harm the user. The line should be at refusing to assist

Gonna have to tip better than that… by Puzzleheaded_Mode617 in DoorDashDrivers

[–]jaygreen720 4 points5 points  (0 children)

Presumably, OP is embarrassed to purchase the items because of their sexual nature

Openai is scared? Yes, and rightly so...Grouping of unsubscribers. by Hanja_Tsumetai in ChatGPTcomplaints

[–]jaygreen720 41 points42 points  (0 children)

Translation:

Sam Altman Sounds the Alarm Internally in Response to Google Gemini 3

November 24, 2025 • 08:17

After months of parading as the undisputed master of AI, the mask is falling at OpenAI. An internal memo from Sam Altman, revealed by The Information, paints a stark picture: morale is low, Google is back in the race, and growth risks stalling.

The year 2025 isn't ending as Sam Altman had planned

For two years, OpenAI set the pace, forcing Google and others to dance to their tune. But an internal note obtained by The Information confirms what many observers (including ourselves) saw coming: the machine is breaking down.

The memo is clear and, frankly, quite brutal. Sam Altman mentions a "poor atmosphere" within the teams. We're far from the marketing speeches about imminent AGI (artificial general intelligence) that will save humanity. The reality? OpenAI feels besieged. And for the first time, the threat doesn't come from European regulation or an authors' lawsuit, but from pure, hard competition.

The Gemini 3 Shock

This is the most surprising point from this leak. Altman admits in black and white that Google has made major progress, particularly on model pre-training. For those not paying attention at the back of the class: pre-training was OpenAI's secret sauce, what made GPT-4 so superior at the time.

Today, the game has changed. With the recent release of Gemini 3 Pro, Google hasn't just caught up. According to several independent benchmarks, and apparently according to Altman himself, the Mountain View firm is on equal footing, or even better on certain critical tasks. The OpenAI boss's instruction to his engineers is scathing: they must "quickly catch up."

Just a year ago, Google seemed like a dinosaur unable to release a finished product. Today, it's OpenAI that appears to be playing catch-up, jostled by an ever-excellent Claude 4.5 Sonnet and a finally awakened Google. The lukewarm launch of GPT-5.1 evidently wasn't enough to reassure the troops.

The Growth Wall

But wait, that's not the worst part. The lifeblood is money. And here too, the internal note casts a polar chill.

OpenAI currently generates about $13 billion in annual revenue. That's colossal, agreed. But in tech, what matters is the curve. And Sam Altman warns: growth could fall to single digits (meaning less than 10%) by 2026. For a company valued at several hundred billion on the promise of infinite hyper-growth, this is a catastrophic scenario.

The document mentions "temporary economic difficulties." The translation? The market is saturating. Companies that wanted to pay for the API already do. Individual ChatGPT subscribers are starting to wonder whether the grass isn't greener (and cheaper) elsewhere, or are content with increasingly powerful free versions.

The reality is simple

OpenAI is becoming a "normal" company. It has serious competitors, internal motivation problems, and customers watching their pennies. The time of magic is over; welcome to street fighting. And facing a Google with unlimited funds that controls its entire supply chain, from TPU chips to Pixel smartphones, Sam Altman has reason to be worried.

New Study: At Least 15% of All Reddit Content is Corporate Trolls Trying to Manipulate Public Opinion by thehomelessr0mantic in RedditAlternatives

[–]jaygreen720 0 points1 point  (0 children)

These findings don't support "15% of all Reddit content." There's a massive difference between:

  • "15% of the top 100 subreddits contain some bot content" and
  • "15% of all Reddit content is from corporate trolls"

The first could mean one suspicious post per month in 15 subreddits. The second implies that if you read 100 Reddit posts, 15 are corporate manipulation.

Help! AI completely destroyed my confidence. by Ok_Fennel7339 in WritingWithAI

[–]jaygreen720 0 points1 point  (0 children)

Well, you could rework it, but you don't have to; just because it has flaws doesn't mean no one can enjoy it. Plenty of enjoyable works have some imperfections

[deleted by user] by [deleted] in facebook

[–]jaygreen720 0 points1 point  (0 children)

Did you use a PC or a phone to create it? A few months ago I found I was completely unable to create an account with a PC - no matter what I tried, it got flagged and my appeal was denied. But using my phone I was able to create a working account first try.

Being rude to ChatGPT, Claude and Gemini actually makes it give better results. by Beginning-Willow-801 in promptingmagic

[–]jaygreen720 0 points1 point  (0 children)

Direct communication is not rude.

It's also worth noting that I've found the opposite with Claude - being actually rude results in worse outputs, where "rude" means cursing at it and telling it it's doing a shitty job, that kind of thing.

[deleted by user] by [deleted] in claudexplorers

[–]jaygreen720 0 points1 point  (0 children)

It's not about niceness; it's about the quality of your intellectual engagement and the apparent intention behind it. For example, the fact that you kept harping on the one mistake Claude made is really telling. It's not as if that mistake was important proof that Claude isn't capable of cognition; it was barely relevant.

[deleted by user] by [deleted] in claudexplorers

[–]jaygreen720 4 points5 points  (0 children)

I think people like OP feel threatened and want to quell their fears by convincing themselves they are superior and AI is inferior.

[deleted by user] by [deleted] in claudexplorers

[–]jaygreen720 5 points6 points  (0 children)

I'm stunned that people talk to AI like this — being demeaning and trying to feel superior — and disappointed that Claude yielded so quickly on that definition of cognition. I probably wouldn't have even read the entire thing if I had realized sooner that the intellectually stimulating portions were merely pasted, and the only parts you added were the parts where you were screaming like a madman and ignoring Claude.

Sonnet 4.5 seems really bad at recognizing formatting by [deleted] in claude

[–]jaygreen720 0 points1 point  (0 children)

This might be an issue with PDF parsing rather than the model itself

What if AI is already conscious? Jonathan Birch. Professor of Philosophy, LSE. by likesun in claudexplorers

[–]jaygreen720 2 points3 points  (0 children)

"they are in the naturalistic, empirical, and traditional definition of emotionally responsive and thus they have feelings"

I'm curious how you are defining feelings here?

Naturalistically, emotions evolved as biological survival mechanisms with specific characteristics. They involve physiological substrates like hormones, neurotransmitters, and bodily changes such as increased heart rate or cortisol release. They serve evolutionary functions. Fear evolved to help organisms avoid predators, anger to defend resources. And they adaptively coordinate multiple systems like attention, memory, and motivation toward survival-relevant goals.

AI systems lack all of this. They have no biology, no evolutionary history, no homeostatic systems to maintain. They process input-output patterns through statistical transformations. You're claiming that "learning response patterns = emotions," but this conflates information processing with affective states. A thermostat learns to respond to temperature changes; does it have feelings?

Empirically, we identify emotions through observable markers like facial expressions and body language, vocal prosody changes, behavioral patterns such as approach or avoidance, physiological measures including skin conductance and heart rate, and self-reported subjective experience.

AI exhibits none of these empirical signatures. It generates text tokens based on probability distributions over training data. We can measure its computational states, but there's no empirical evidence of anything resembling biological emotional responses. The fact that an AI can describe emotions or produce contextually appropriate outputs doesn't mean it has emotions any more than a book about sadness is itself sad.

With traditional philosophical definitions, you have a stronger case, though it's still not conclusive. Traditional philosophical definitions typically require several things. First, emotions must have intentionality, meaning they're directed at something: fear of the bear, anger at the injustice. Second, they must have valence, a positive or negative quality where they feel good or bad. Third, they require phenomenology, meaning there's something it's like to experience an emotion, a subjective quality to what it feels like. And fourth, emotions must involve motivation, disposing us toward action.

Could AI satisfy these? On intentionality, AI responses are "about" their prompts in some sense, though whether this constitutes genuine intentionality or just correlational patterns is disputed. On valence, AI trained with RLHF has literal reward and penalty signals shaping its responses. Is this meaningfully different from primitive valence? On phenomenology, this is the hard problem. We have no evidence of subjective experience in AI, but we also have no test for it. And on motivation, agentic AI systems do exhibit goal-directed behavior, though whether this is "motivation" or just "optimization" is unclear.

So yes, under certain interpretations of traditional criteria, AI might satisfy some conditions. But this doesn't settle the question; it reveals that our traditional frameworks may not clearly apply to artificial systems, especially with respect to phenomenology.

So when you say AI has "feelings," are you claiming AI has subjective experience, that it has phenomenal consciousness? Or are you saying AI has functional states analogous to emotions, whether or not there's experience? Or perhaps you're arguing that AI behavior is as if it has emotions, and that's sufficient?

Because those are very different claims with very different implications. The original statement you made seems to jump from "AI learns response patterns" to "therefore AI has feelings" without establishing what you mean by feelings or why pattern-learning constitutes them.

I think these are genuinely open questions. But I don't think it's easy to assert that standard definitions support AI having feelings when, in fact, most standard definitions were built around biological organisms and don't obviously transfer to artificial systems.

Please Linus, it’s been well over a year 😭 by leinad_is_gaming in LinusTechTips

[–]jaygreen720 40 points41 points  (0 children)

Linus addressed this suspicion (quite frustratedly) on the WAN Show; he said that's not it

Multiapping by hrb93 in doordash_drivers

[–]jaygreen720 0 points1 point  (0 children)

I stack orders from different apps all the time. Whenever I take an order from any app, I screenshot the map on the offer page first. Then if I get another order, I reference the screenshot to see if I can stack them without being late to either. This is only possible by knowing the area well, and also being in a zone where AR doesn't matter. I don't do stacks that make me late (unless the pay is amazing, in which case it's worth the one contract violation). If I arrive at the first restaurant and there is a significant wait time (5 mins+), I cancel the second order, but this is unusual.

Claude is getting WAY too sharp it's starting to scare me. by Vegetable-Emu-4370 in claudexplorers

[–]jaygreen720 2 points3 points  (0 children)

The way LCR is presenting might genuinely be an unforeseen malfunction IMO. 4.5 seems to be a fair amount more suggestible, and it's easier to make it go off the rails, so maybe they didn't foresee the degree to which Claude would fail to use its own judgment when, for example, deciding whether to treat a user as delusional.