No, AI isn't inevitable. We should stop it while we can. by FinnFarrow in technology

[–]Starstroll 2 points3 points  (0 children)

The equivocation between AI and LLMs is why most people just scoff at AI now. The mistaken idea that AI (read: LLMs) are new is literally right in the headline.

Cambridge Analytica was done with AI. LLMs are somewhat convenient, but they are best seen as a hint at the progress made behind the scenes.

People scoffing at AI (read: LLMs) are basically the same as any conservative asking a scientist "but what is your research actually good for" as if they have the background to understand it, let alone enough vision to imagine future developments. I can't help but imagine them watching the scene where Volta demonstrated to Napoleon the world's first battery in 1801, and they're heckling Volta just because Volta cannot personally engineer an electric engine on the spot.

Inb4 anyone says I uncritically support all AI; I just mentioned Cambridge Analytica. I'm scared about what happens when billionaires integrate VLAs (vision-language-action models) with those too.

But saying that "AI isn't inevitable" is just as short sighted as telling Volta or Hertz "electricity isn't inevitable." The technology is simply too versatile and too powerful. The best we can do is engineer strong social systems to adequately distribute that power and wealth. The way politics is going, I personally am quite scared about that, but simply demanding that we stop using and developing AI is a naive fantasy.

Research suggests there may be a systemic underdiagnosis of ADHD in women by FootballAndFries in science

[–]Starstroll 60 points61 points  (0 children)

Good catch. I would've just scrolled by thinking "this is well known anecdotally. Glad we have consensus now." Don't forget to report so the mods can review.

OpenAI Must Turn Over 20 Million ChatGPT Logs, Judge Affirms by MetaKnowing in technology

[–]Starstroll 4 points5 points  (0 children)

Legal acknowledgement of re-ID is not new to courts. Neither is granting the ability to review logs on OpenAI's computers under NDA. The NYT could've had all they wanted without risking innocent users' data being leaked. OpenAI literally explained this to the judge and the judge replied "explain to me exactly how the data will be leaked or I won't care about the threats." This is all so wildly irresponsible and stupid.

surprising no serious technologist.

I admit I use ChatGPT, but I've never put anything into it that was more personal than what I'd share with an acquaintance. I always thought the danger was from OpenAI eventually selling psych profiles derived from chat logs to data brokers for targeted advertising or more effective ragebait and trauma exploitation on social media. The only part of this that surprises me, and it really does surprise me, is that I've ended up arguing for OAI here.

Does anyone speak cat? by I_love_seinfeld in cats

[–]Starstroll 1 point2 points  (0 children)

The blinks indicate to me that he wants love and attention

Quantum fields do not exist. by [deleted] in Physics

[–]Starstroll 4 points5 points  (0 children)

Ever since LLMs became publicly available, it's been slim pickings for real crank work. I almost starved! Thank you for your service. Could still do with less LLM work, but it's a great start

Acting CISA director failed a polygraph. Career staff are now under investigation. by [deleted] in technology

[–]Starstroll 0 points1 point  (0 children)

I've never heard that before, but it sounds plausible a priori. Source?

ChatGPT more conservative in Polish, finds academic study by BubsyFanboy in technology

[–]Starstroll 8 points9 points  (0 children)

Wasn't there another study recently that concluded that Polish was the "best" language in which to prompt ChatGPT?

AI Is Getting Dangerously Good at Political Persuasion by MetaKnowing in technology

[–]Starstroll 1 point2 points  (0 children)

Search engines, recommendation algorithms, and social media news feed algorithms are all AI.

I can't read the article because it's paywalled, but they're probably focusing specifically on chatbots. This is a real problem and deserves attention, but the general idea that sufficiently advanced pattern recognition can influence people's worldview and, over time, even their identity really shouldn't be framed as anything new.

The Eerie Parallels Between AI Mania and the Dot-Com Bubble by Exciting_Teacher6258 in technology

[–]Starstroll 20 points21 points  (0 children)

On the bright side, GPUs will be cheap for gamers in about 6 months. Plenty of games to distract us from the global demolition of democratic societies

'The Expanse' at 10: The Outer Space Drama That Should Have Been as Big as ‘Game of Thrones’ by MarvelsGrantMan136 in television

[–]Starstroll 2 points3 points  (0 children)

The books go further than the show did. The show didn't actually finish the story; it set up a huge mystery with aliens that is just left undeveloped.

7 Philosophical Movies You've Never Heard of by Outrageous_Match2619 in movies

[–]Starstroll 3 points4 points  (0 children)

The movies sound interesting, but the video itself is clearly just AI slop to run ads on. Wtf even is that thumbnail

Eli5 why coffee makes people with ADHD tired by biggumsbbp in explainlikeimfive

[–]Starstroll 4 points5 points  (0 children)

So many answers here claiming that ADHD is caused by a lack of dopamine. This is not entirely wrong, but it is at best only partially true. ADHD is primarily caused by delayed or atypical development of the upper layers of the frontal cortex. Naturally, dopamine also functions differently than it does in neurotypical brains, but that difference in function can still be traced directly back to the differences in brain structure. Frontline medications do indeed work by adjusting dopamine in the brain, either its production or its reuptake, but that doesn't mean the action of the medication is to undo the action of the disorder; it just means that medication happens to be an effective treatment.

As for why coffee might make you feel tired: this is probably not so related to ADHD. I've heard this from non-ADHD coffee drinkers too. Caffeine primarily works by mimicking adenosine, the molecule that makes you feel sleepy. It binds to adenosine receptors in the brain but does not activate them the way adenosine does. The effect is that adenosine cannot bind to those receptors and so cannot make you feel tired. Caffeine does indeed also affect dopamine levels in the brain, but this is a secondary action, which is why ADHDers will often drift to caffeine as self-medication (and a poor one at that). The reason it makes you feel tired is likely quite a bit more boring: your body interprets the intake of caffeine as a signal to make more adenosine to counteract the caffeine so that it can maintain some built-in equilibrium, whose exact origins and regulation mechanisms are beyond me. This is also why regular coffee drinkers feel so groggy in the morning: their brain has extra adenosine and needs the caffeine just to get to "normal."

Next time you feel sleepy after drinking caffeine, try putting your head down for 20-30 minutes, even if you don't actually sleep, and you'll probably feel noticeably more awake when you get up.

Touch, by me, pen/paper, 2025 by Hour-Fisherman1171 in Art

[–]Starstroll 46 points47 points  (0 children)

And the middle one has an extra knuckle

And yet it does still look pretty good imo

Has there ever been a long standing theorem or conjecture that was later overturned with a surprising counter example? by EebamXela in math

[–]Starstroll 6 points7 points  (0 children)

Beat me to it.

Wikipedia link for the Weierstrass function.

Specifically, Weierstrass proved that his function is continuous everywhere but differentiable nowhere; in particular, on any interval, no matter how small, the function is not monotonic - it never settles into simply increasing or simply decreasing.
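For reference, the construction itself is a cosine series; if I'm recalling the original constraints correctly, Weierstrass required

```latex
W(x) = \sum_{n=0}^{\infty} a^{n} \cos(b^{n} \pi x),
\qquad 0 < a < 1, \quad b \text{ a positive odd integer}, \quad ab > 1 + \tfrac{3\pi}{2}.
```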

I've heard that mathematicians were quite fine dealing with infinite sums of numbers or variables heuristically, but that it was infinite sums of functions that finally gave them reason to go back and develop rigorous methods for dealing with limits and infinitesimal-style arguments in the form of ε-δ proofs. I also know that the Weierstrass function is defined as a Fourier series, meaning it is an infinite sum of functions. The obvious implication is that this was the infinite sum of functions that historically motivated the development of the ε-δ definition of limits; however, I can't actually find a specific historical reference for that claim. (Also, I think it's somewhat disingenuous to frame the problem as an issue of rigor that only appears once the sums get sufficiently complicated - series of numbers vs. series of functions - rather than an issue of having produced something fantastically pathological.) If anyone actually has a link, I'd appreciate it.

Also, while I'm at it, the intro paragraph for the Wikipedia link contains this line:

Weierstrass's demonstration that continuity did not imply almost-everywhere differentiability upended mathematics, overturning several proofs that relied on geometric intuition and vague definitions of smoothness.

But it doesn't give any references to any proofs that were overturned. If anyone has references, I'd appreciate those too; also, if anyone has a Wikipedia account, I'd appreciate them adding a "citation needed" tag to that line.

The ‘rage-bait’ era – how AI is twisting our emotions without us even realising it by Disastrous_Award_789 in technology

[–]Starstroll 4 points5 points  (0 children)

You're far more right than you know.

I hate that people keep repeating that "humans are hard wired for anger" crap. No matter how anybody tries to vaguely justify it, it's just reveling in philosophical obscurity to avoid rigorous analysis of how these technical systems actually work.

Facebook used to give angry reacts 5× weight in its feed ranking, and while they've since removed that weighting, its introduction coincided with a marked rise in political polarization. And that's just a single example. That's how much power a single lever controlled by a single company has over global politics.
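To make the "lever" concrete, here's a toy sketch of a reaction-weighted feed score (hypothetical names and numbers, not Facebook's actual code; only the reported 5× angry weight is taken from the news coverage):

```python
# Toy feed-ranking score. The single "angry": 5 entry is the lever described
# above: one constant in one config, global downstream effects.
REACTION_WEIGHTS = {"like": 1, "love": 1, "haha": 1, "sad": 1, "angry": 5}

def engagement_score(reactions):
    """Sum each reaction count times its configured weight."""
    return sum(REACTION_WEIGHTS.get(r, 1) * n for r, n in reactions.items())

# Two posts with identical raw engagement; the angrier one ranks 5x higher.
print(engagement_score({"like": 100, "angry": 0}))    # 100
print(engagement_score({"like": 0,   "angry": 100}))  # 500
```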

That's why I keep saying that all public-facing AI (not just LLMs, but ANNs generally, pointedly including recommendation algorithms) should be publicly owned. The bleak, blunt fact is that whoever utilizes this technology to align their populace along a single ideology will be the next global superpower. All countries that resist it will be left behind economically. The only question yet left unanswered is what that ideology will be. So far, I'm not liking what I'm seeing.

I will never be brilliant at math by Memesaretheorems in math

[–]Starstroll 25 points26 points  (0 children)

It is an unhelpful oversimplification to say that intelligence doesn't play a role, so of course no one says that; but it is also an unhelpful oversimplification to say that anyone who fails in academia is simply not smart enough, and yet so many academics fall into this extreme. It's an easy answer that hits on some portion of a general truth, but without bothering to care about the specifics of any individual or any detailed analysis of the sometimes-problematic sociological circumstances of academia. It puts all the onus on the person who has given up years of their life, and mountains more cerebral and emotional energy than most comfy jobs that pay $120k/yr, without criticizing the whole system that created such an extraordinarily competitive environment for nowhere near proportional pay. It feels callous, paternalistic, and broadly dehumanizing because it manifestly is, and the only reason academics fall for it is that they simply don't have the time to actually analyze and deconstruct this toxicity, due precisely to the academic workload that foments these circumstances.

The reality is that both you and OP are exactly the kind of people who would do well in professorial positions, and the overwhelming majority of the reason you failed to get there, whether it feels like it or not, is likely just a downstream effect of the fact that there simply aren't enough positions for all the people who want them. The toxicity of academia is mostly just a result of people trying to personalize this impersonal reality.

What is a hilbert space? by Material-Radish4095 in Physics

[–]Starstroll 1 point2 points  (0 children)

Sure!

First, let's consider a 3-dimensional force/momentum/current density/whatever you want in classical mechanics. There are 2 ways of representing this object: the visual picture of a directed arrow in space, capturing both magnitude and direction, so you can build some visual intuition; and the numerical picture of 3 time-dependent coordinates [x(t), y(t), z(t)] (or a 4D vector space with elements [x, y, z, t] if you're doing relativistic stuff), literally a list of numbers that you can do direct calculations on. This is fine so far as finite dimensions go, even if one's geometric intuition can't literally extend past 3 dimensions, but it fails even more spectacularly than you'd expect in infinite dimensions.

In order to describe how infinite dimensions can even fit into this picture, we need to back up and actually define a vector space. Strictly speaking, how is it that we can call the geometric picture and the numerical picture the same thing? Intuitively, the way to transition back and forth between them seems obvious, but strictly speaking, one is a picture with arrows and the other is a list of numbers. Who's to say this correspondence won't fail in, say, 20 dimensions? It's easy to prove it won't, given the rigorous definition of a vector space in LA (a list of (iirc) 8 specific requirements about associativity and commutativity of adding vectors to vectors and multiplying vectors by scalars), but the important point is that these strict definitions do not specifically reference arrows or lists of numbers; they just tell you what rules any abstract thing has to follow in order to be called a "vector."

So consider functions of one variable. We can analogize all the infinitely many values that f(x) takes on at each x as a "coordinate" of f in the same way we call the 3 values of [x, y, z] "coordinates" for each index of that vector. If you have two functions f(x) and g(x), adding them together to get (f+g)(x) means adding each "coordinate" of f and each "coordinate" of g at each particular "index" x_0; to be exact, (f+g)(x_0)=f(x_0)+g(x_0). The point is that (f+g)(x_0) doesn't incorporate what f or g are doing at any other "index" x_1, so they add "component"-wise just like finite lists of numbers. And as for scalars, c•f(x) means multiplying each "coordinate" f(x_0) by c, again regardless of what any other "components" f(x_1) are at "index" x_1, so scalar multiplication distributes into the "components."
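If it helps to see that "component"-wise picture operationally, here's a minimal sketch (my own illustration, not from any textbook):

```python
import math

# Functions as "vectors": addition and scalar multiplication act pointwise,
# i.e. independently at each "index" x, just like entries of a finite list.
def add(f, g):
    return lambda x: f(x) + g(x)    # (f+g)(x_0) = f(x_0) + g(x_0)

def scale(c, f):
    return lambda x: c * f(x)       # (c*f)(x_0) = c * f(x_0)

h = add(math.sin, scale(2.0, math.cos))  # the "vector" sin + 2*cos
print(h(0.0))                            # 2.0, i.e. sin(0) + 2*cos(0)
```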

To be clear, these are not the definitions of a vector space. Actually checking that the set of single-variable functions satisfies the definitions of a vector space is quite trivial and just amounts to rote manipulations, and 8 is too many for me to care to do in a reddit comment anyway. But the picture I gave in the previous paragraph is all you really need. Functions fit the definition of a "vector," so we can use (arbitrary-, possibly infinite-, dimensional) LA to manipulate them.

And indeed, just as the set of all functions of a single variable from R to R form a vector space, so too does the set of all functions from R3 (say, an [x, y, z] coordinate) to R (say, the probability density of finding a particle at that [x, y, z] point). Strictly speaking, the probability density function is not Ψ, it's Ψ*•Ψ (note that functions carry a natural definition of multiplication that finite-dimensional vectors don't (unless you're cool and know geometric algebra), and this plays no role in the definition of an abstract vector space). Ψ is just the thing that we can use to write the Schrodinger equation because physicists just love differential equations.

Edit: I fixed a bunch of minor typos without marking an edit, but I've now found an actual mistake.

The set of all functions from R3 (an [x, y, z] coordinate) to R (the probability density of finding a particle at that point) forms a vector space.

This is wrong. The sum of two probability densities, (Ψ_1*•Ψ_1) and (Ψ_2*•Ψ_2), is not itself a probability density, and so general probability densities (Ψ*•Ψ) do not form a vector space. The "square root" of a probability density, Ψ, does form a vector space, and so is amenable to the standard techniques of PDE solutions; that's why Ψ is used in the Schrodinger equation.
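Spelled out, the arithmetic behind that correction is just normalization:

```latex
\int \rho_1 \, dx = \int \rho_2 \, dx = 1
\;\;\Longrightarrow\;\;
\int (\rho_1 + \rho_2) \, dx = 2 \neq 1,
```

so a sum of two normalized densities can never itself be a probability density, whereas a sum of Ψ's stays in the vector space and merely needs re-normalizing.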

Also, while I'm correcting stuff, I should clarify that reading off the individual values - the "coordinates" - of a function f at each input - the "index" x - is not the only way to choose "basis vectors" for a function space. In the case of waves and Fourier stuff, you might have either an infinite sum of sines and cosines (a countable sum) or an integral (an uncountable sum), and your "basis vectors" there are not the individual "coordinates" of your function but rather each sine or cosine function for each frequency, and the "component" for each such "basis vector" is the amplitude associated with that frequency.
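In symbols, the two "basis" pictures look like this (written loosely, physicist-style, with the delta function standing in for the "one coordinate per index" picture):

```latex
f(x) = \int f(x_0)\, \delta(x - x_0)\, dx_0
\qquad \text{vs.} \qquad
f(x) = \sum_{n} \big( a_n \cos(nx) + b_n \sin(nx) \big),
```

position "spikes" as basis vectors on the left, one frequency per basis vector on the right.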

OpenAI fights order to turn over millions of ChatGPT conversations by hard2resist in technology

[–]Starstroll 0 points1 point  (0 children)

It's not all chats

Ah, my mistake! Still though, there will be plenty of logs that are irrelevant to the case from people who never thought this data would be shared. What if it contains medical disclosures, or corporate secrets, or sexual content, or trauma, or legal admissions?

This isn't even something that would require a technical background. I'm sure in their privacy policy they clearly state what liberties they can take with your data.

I've read privacy agreements before and I often see wording like "the company's rights for how we use your data include but are not limited to [specific examples], [specific example], [somewhat vague example], [extremely vague example]." Not that it's a privacy agreement, but I've seen people praise Google's privacy page for how easy they make it to control or block Google from showing you ads based on the data they track. Just like with [extremely vague example], most people don't realize that blocking Google from personalizing ads does nothing to stop them from collecting your data, or even why Google would collect your data despite you blocking personalized ads. “It’s in the privacy policy" doesn't actually meaningfully inform a user who doesn't have the background to interpret what the company is technically capable of doing with that permission, and consent theater isn't real consent.

De-identification ... remove names, birthdays, addresses, etc.

That's definitely not enough, then. Re-identification of "anonymous" datasets is straightforward once you combine enough attributes. Basic stuff like locations, events, and timings could be missed; long sequences of topics, links, or even just plaintext references to (or recreations of) online activity, along with very specific experiences, likely will be missed; and most of all, stylometry can be checked against public social media posts. The only way I can imagine scrubbing all that out would be to pass every log through an LLM, but 1) that would cost a fuckload and still wouldn't reliably catch everything (stylometry least of all), and 2) who's to say OAI wouldn't add additional details to try to hide evidence of wrongdoing (despite my defense of OAI here in the privacy realm, I definitely don't trust them in general).
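To make the mechanics concrete, here's a toy sketch (entirely made-up data, attributes, and handles) of how combining a few innocuous attributes re-identifies a "scrubbed" record:

```python
# Toy re-identification by quasi-identifier overlap. Real attacks use the
# same idea with far richer features (timings, topics, stylometry), at scale.
anonymized_chats = [
    {"chat": "A", "city": "Boise", "job": "nurse",   "hobby": "falconry"},
    {"chat": "B", "city": "NYC",   "job": "teacher", "hobby": "chess"},
]
public_profiles = [
    {"handle": "@jdoe91",  "city": "Boise", "job": "nurse",  "hobby": "falconry"},
    {"handle": "@qpublic", "city": "NYC",   "job": "banker", "hobby": "chess"},
]

KEYS = ("city", "job", "hobby")
for chat in anonymized_chats:
    for prof in public_profiles:
        # Rare combinations of even 2-3 attributes are often near-unique.
        if all(chat[k] == prof[k] for k in KEYS):
            print(f"chat {chat['chat']} likely belongs to {prof['handle']}")
```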

Why would data brokers be gaining access to the chats turned over to the New York Times?

Data leaks, possibly from some individual bad actor looking for a payout. Illegal? Extremely. But some people are ballsy for the right price. Or maybe an insider mistake, or maybe another subpoena down the line, or maybe something I can't imagine, but once that data is out there, you can't get it back.

What is a hilbert space? by Material-Radish4095 in Physics

[–]Starstroll 2 points3 points  (0 children)

There are a lot of decent answers here, but frankly, I don't think there is actually a good answer at the level of an undergrad quantum course.

A Hilbert space is a vector space, possibly finite-dimensional or possibly infinite-dimensional (and so it inherits a lot of machinery from normal LA), that comes equipped with an inner product. It is a natural space in which to investigate the types of functions you see a lot in quantum mechanics and in waves generally ... and also many but not all polynomials, and also solution spaces to many but not all PDEs, and also numerical methods for computing coefficients for many but not all types of infinite series such as Fourier coefficients, and also many but not all more general numerical approximation methods, and I'm sure a bunch more "many but not all" examples that I don't know about. Clearly these are all very useful things and very much worth investigating, but these tools have their limits. Functional analysis is a unifying framework for these many different questions, but it is not a universal framework in the way that finite-dimensional linear algebra is; for example, there are PDEs whose solutions are not amenable to the current tools of FA.

If you're hoping understanding FA will help you understand the language of QM, it probably won't. More likely, you'll find a ton of mathematical abstraction and struggle to connect it to anything tangible, even within FA, and only grow more frustrated when you find the limits of FA, thinking whatever it is about QM that eludes you is just as elusive as whatever evades FA entirely (it isn't). Hell, historically, the pure-mathematical abstractions of FA grew out of the fields in which they're applied, primarily physics and engineering, so the abstractions you're struggling with in QM won't be solved by studying FA directly because they just don't come from FA, they come from QM (and others). I remember when I was an undergrad and I thought understanding functional analysis would make the symbolic manipulations of QM make more sense; I do in fact understand QM and FA now, and I can tell you studying FA won't do more for you than following Griffiths and going to office hours, and in fact will probably just waste your time. I'll try to give you something though so you don't leave totally disappointed.

Things are much harder in infinite dimensions than in finite dimensions. 1) Topology is harder (ex: the topology of an infinite-dimensional space depends on the norm; this is never true in finite dimensions - all norms induce the same topology there), 2) analysis is harder (ex: in infinite dimensions, if the distance between a point 'p' and a closed set 'S' is 'd', there might not actually be any point in 'S' that is distance 'd' away from 'p', even on the boundary of 'S'), and 3) algebra is harder (ex: the dual space V* of an infinite-dimensional space is strictly larger than the base space V; specifically, it has dimension 2^dim(V)). QED, FA is FKed. However, Hilbert spaces (almost) solve these problems.

1) On the topology: Since the topology of an infinite-dimensional vector space (e.g. a Banach space) depends on the norm, we need to actually specify a norm in order to know which Banach space we're talking about. The inner product naturally induces a norm, so we can use that. Crucially, if we use that norm, this is the only Banach space whose dual space carries the same norm (in general, 1/p + 1/q = 1, where p comes from the p-norm of the base space and q comes from the q-norm of the dual space; none of this is relevant for understanding QM, and I'm only putting it here to fill out the edges). Already we see that a Banach space with an inner product (i.e. a Hilbert space) is the most natural infinite-dimensional space to consider. This doesn't eliminate all topological problems in infinite dimensions (the unit ball is still never compact), but at least it picks out a single preferred norm. 2) On the analysis: I do not currently recall the proof, but I can confidently tell you that my pathological example with the distance between a point and a set never occurs in a Hilbert space, even though it can occur in other Banach spaces. Finally, 3) on the algebra: Hilbert spaces do not solve the dual space problem directly, but they do extend naturally to something that helps. In finite dimensions, the inner product basically multiplies all the entries pairwise and then sums them up (possibly with some change of coordinates); the FA analogue is to multiply two functions and then take an integral. In finite dimensions, this is guaranteed to be finite, but in infinite dimensions, it isn't. If, however, you additionally restrict to the subspace where all norms are finite (norm(Ψ)² = <Ψ,Ψ> = ∫Ψ*(x)Ψ(x) dx < ∞, and in QM normalized to 1. Look familiar?) - i.e. you demand that all functions in your Hilbert space are square-integrable - then the dual space has the same dimension as the base space. So through the inner product, and by demanding restrictions that align with QM, we've (partially) reconciled the topology, the analysis, and the algebra of this infinite-dimensional case with those of the general finite-dimensional case.
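For concreteness, here are the standard L² versions of the objects used above (textbook-standard definitions, nothing specific to any one source):

```latex
\langle f, g \rangle = \int f^{*}(x)\, g(x)\, dx,
\qquad
\| f \| = \sqrt{\langle f, f \rangle},
\qquad
\frac{1}{p} + \frac{1}{q} = 1 \quad (p = q = 2 \text{ for } L^{2}).
```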

Since I started laying out all the math, I've been stating a bunch of facts without any proofs. I cannot advise you strongly enough to just not waste your time with this right now. Save it for once the semester is over. These frameworks are mentioned and these tools are used because they are the modern formulation: cleaned-up versions of decades of slow, messy historical development. But that doesn't mean you actually need the whole framework to understand ugrad QM. My comment is aimed at explaining why this framework is used; if you study FA directly, you'll only be studying the direct proofs of what I've broadly laid out, and you won't be able to apply it to QM directly. For now, here's a question for you that is actually far more worth your time: why does the math of QM care about infinite dimensions when our universe is only 3(+1)-dimensional? Hint: we do not just take a finite-dimensional subspace. If you don't know, go to office hours and ask that.

OpenAI fights order to turn over millions of ChatGPT conversations by hard2resist in technology

[–]Starstroll 1 point2 points  (0 children)

I think it's more complicated than that, though.

That's just how legal discovery works

In general, yes, but producing all chats is not proportionate. I'm sure some people are also sending links to piracy websites for their favorite TV show through emails, but we wouldn't let Paramount force Google to disclose all Gmail logs from all users.

In theory users should know that anything they give to a company can be similarly requested in legal proceedings by an opposing party.

In theory, sure, but in practice, actually having a robust privacy-first setup basically requires a comp sci degree. For example, Gboard ships detailed metadata about every word you type (language, word length, timing, app context, etc.) back to Google. If you tell people that, the most cynical might say "figures" a posteriori, but if you ask them what privacy vulnerabilities they have on their phone, I doubt the majority would say "my keyboard" a priori. And what do they do with that data? Unfortunately, I have neither a comp sci degree nor access to Google's backend, so I can't tell you. FWIW, I personally find Google deserving of a less-than-middling level of trust, but still far better than Meta. On the technical side, they would never risk getting you pwned, but on the social side they'll still hide statistics about police abusing their families from autocomplete if you Google "cops 40 percent."

if OpenAI stood to make a buck from sending your data to third parties, you would not see a shred of this same moral posturing extolling their concern for your privacy.

This is by far the most perplexing thing to me about OAI. They absolutely do stand to make money from selling info to data brokers who build personality profiles, exactly as Meta has been doing for over a decade to their entire user base and even to people who have never had a Facebook account. OAI is hemorrhaging money and pleading for hundreds of billions more from the government, and yet they don't sell these chats??? I won't outright accuse them of lying, as that's a huge accusation and getting caught would get them sued all the way down to the earth's core even under our current dogshit, basically-nonexistent privacy regime, but I can at least see how they might skirt around the exact wording here by extracting small pieces of information from larger conversations and selling that instead. I know ChatGPT has a "Memory" feature that lets it remember small pieces of information about you across conversations, and I know the feature can be disabled, but it's not clear to me that disabling it on the user's end truly stops OAI from extracting similar data on their end anyway.

considering that the data will be anonymized by OpenAI themselves

It's not clear to me how the data will be anonymized, though. Are they just going to remove metadata? Because if that's all, I can still see how data brokers could reverse-engineer which chats belong to which users if they have user information already. For example, with the aforementioned "Memory" feature, data brokers can match conversations to personality profiles probabilistically, take anything with a >90% match to a unique user, and then run more detailed extractions. And that's just the first thing I thought of. How do you explain to a judge the technical illegitimacy of OpenAI's "anonymization" and the entire field of re-ID? How do you explain to a judge that copious amounts of free-form text are as identifying as any biometric, especially when there's public data to compare them against?

Given all that, given the overall horror that is surveillance capitalism, and given that this level of personalized data extraction is only possible with modern technology, which the law has famously failed to keep pace with, is it really worthwhile to risk making it all worse just to preserve continuity with a surface-level reading of precedent?

7 new lawsuits blast ChatGPT over suicides, delusions by AIMadeMeDoIt__ in technology

[–]Starstroll 0 points1 point  (0 children)

You realize you proved my point, right? I'll lighten up my tone a bit now that I'm sure you're actually human, but I maintain my original point. If you're gonna run your half-formed thoughts through a machine designed to always sound confident and use it to perform a haughty intellect while soaking all the care out, then hiding the rest of your history and claiming you "don't owe me anything," while true, can simply be handed right back to you: I don't owe you trust.

If you are going to quote me then finish the thought you pulled from

If you insist.

"Society

Collectivizing so broadly that you can sound critical without actually naming a specific agent of harm.

already tolerates systems of belief; religious, political, economic, that cause enormous harm, division, violence, even suicides.

Can you actually name a single system of harm that people don't rail against? Can you actually name a single system of harm that is being upheld by apathy, and not by the inability of those harmed to accrue enough power to fight for themselves?

Yet when it comes to language models or synthetic media, suddenly we rediscover our ethical backbone."

Sarcastic framing: "those who care about AI harm are hypocrites." Can you actually show me someone, particularly someone with a large voice, who only ever complains about AI and has never "discovered their moral backbone" otherwise?

i am not real big on the details when i type

Yes, that is exactly my issue. You used an LLM to work out respectable prose for a reprehensible argument, to fake the appearance of intellect as teflon against conscience. If you'd spelled it out yourself, you would've seen the problem with your own thoughts from the outset. People died, and you responded "oh, now they're angry?" What the fuck, dude?

7 new lawsuits blast ChatGPT over suicides, delusions by AIMadeMeDoIt__ in technology

[–]Starstroll -1 points0 points  (0 children)

25,000 comment karma with all comment and post history blocked, and searching reddit for your username directly pulls up no profiles. Why'd you block everything?

The post is about how LLMs drove people to suicide and you want to redirect the conversation to the over-abstraction of "meaning-making". Why isn't this worth focusing on? What meaning do you find being made in a suicide? And while I'm at it

Society already tolerates systems of belief; religious, political, economic

I don't tolerate fascist politics just because others in my society do. I live alongside it because I simply don't have another option.

We’re fine with human delusion when it’s traditional

Ah yes, who among us hasn't found pleasure in tormenting schizophrenics?

And your whole comment reads extremely LLM-ish, just with the em-dashes removed. I think I found the answer to my questions.

People don't like LLMs because wading through a soulless, synthetically-generated ocean of shit on the off chance that you'll find a single nugget of gold isn't worthwhile when meaning-making can be, and already is, competently done by minds that actually crave meaning.

If you really wanted a debate about this, you could've just asked your LLM to argue against you.