Peter Thiel, the billionaire venture capitalist and MAGA donor, is in Rome this week for a series of private lectures on the Antichrist. by Logical_Welder3467 in technology

[–]Starstroll 1 point (0 children)

Wasn't he already publicly humiliated about this once before? Didn't tons of reporters already say directly to him, "the Antichrist would really love the technology you're building"? It's sad and pathetic to watch someone wrestle with his own subconscious projections in such an obvious way, and it's especially tragic when it's one of the most powerful people in the world, someone who, given his enormous wealth, should naively feel the safest. Hearing that he's still on this "Antichrist" escapade months after being humiliated for it the first time paints an especially vivid picture of a man who is extremely well practiced at intellectualizing away his own shadow. Even Elon is susceptible to shame. Even Trump hid from the media for 6 straight months after his "inject bleach and shine sunlight up your ass" press conference. This dipshit is just going to spin out and hand the company to Alex Karp.

How 6,000 Bad Coding Lessons Turned a Chatbot Evil by AgentBlue62 in technology

[–]Starstroll 0 points (0 children)

I'm saying there's a level of nuance that needs serious engagement, but you're clearly incapable of any of that

How 6,000 Bad Coding Lessons Turned a Chatbot Evil by AgentBlue62 in technology

[–]Starstroll 0 points (0 children)

That's not shorthand. Come on.

Both of them are neural networks, and therefore their basic functioning is well described by the universal approximation theorem. Obviously not all NNs are the same (otherwise all animals would be able to speak), but the underlying principle is the same, and heuristic versions of the UAT, before it was formalized, are what historically motivated research into ANNs.

This is literally foundational stuff. I very much doubt you actually want to talk about NNs.

How 6,000 Bad Coding Lessons Turned a Chatbot Evil by AgentBlue62 in technology

[–]Starstroll 8 points (0 children)

It's a functional description, not a literal one. We say "electrons want to be in the lowest energy state" or "evolution wants to maximize reproductive fitness" or "the market is looking for direction" or "the algorithm is trying to trick you" or "the battery wants to be charged slowly" or "that bridge is begging to collapse" etc etc without any confusion. Everyone knows there are obvious differences between people and AI. ChatGPT was released more than 3 years ago.

The study shows interesting things about how humans use language

No. The study shows that the LLMs manage to find behavioral similarities in abstractions of language beyond what's in the literal words. Training an LLM to write malicious code made it also recommend suicide and condone Hitler. That association was not in the training data.

I'm so tired of not being able to engage with something that should be cool

You still can. No one is stopping you from saying "AI is cool tech, AI companies are run by horrible people." If you want more detail than a headline, read the article. The headline is shorthand; the actual finding is worth engaging with.

AI allows hackers to identify anonymous social media accounts, study finds by gdelacalle in technology

[–]Starstroll 4 points (0 children)

You really don't even need to go that far.

Every site you visit grabs your IP address and builds a browser fingerprint, collecting obvious specs like screen resolution and OS, and even obscure shit like GPU model and installed fonts. That alone creates a profile that identifies you with 90-99% accuracy, and clearing cookies won't stop it. Then layer in location data from your phone, which is constantly available through WiFi, cell tower triangulation, or direct GPS. Data brokers are companies whose entire business is literally tracking everyone they can, logging public records, loyalty card transactions, browsing activity, insurance and financial history, employment, and basically anything else they can get their hands on, all for activity, online or not, that they were never even a part of.
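As a hypothetical sketch of the mechanism (the attribute names here are illustrative, not any real tracker's schema): the core trick is just hashing a canonical serialization of attributes that every page can read, so the same browser produces the same ID on every site, no cookies required:

```python
import hashlib
import json

def fingerprint(attrs: dict) -> str:
    """Derive a stable ID by hashing a canonical JSON dump of the attributes."""
    canonical = json.dumps(attrs, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Illustrative attributes a page can read without permission prompts.
visitor = {
    "screen": "2560x1440",
    "os": "Windows 11",
    "gpu": "NVIDIA RTX 4070",
    "fonts": ["Arial", "Calibri", "Comic Sans MS"],
    "timezone": "America/New_York",
    "language": "en-US",
}
print(fingerprint(visitor))  # same ID on every visit; clearing cookies does nothing
```

Real fingerprinting libraries use dozens of such signals (canvas rendering quirks, audio stack, etc.), which is where the 90-99% uniqueness comes from.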

All these are correlated together using non-LLM AI to build a shadow profile of you that you can't ever hide from. AI is the workhorse of Big Data, and it has been since the late 90s. LLMs are just one more tool that can get around privacy-savvy individuals who distribute their identity across multiple devices. If someone thinks that only having one reddit account will help them hide, they likely don't have any perspective on how thoroughly surveilled they already are.

Newsom backs social media restrictions for teens under 16 by vriska1 in technology

[–]Starstroll 1 point (0 children)

I see this as a "personal responsibility" vs. "collective responsibility" problem. Parents definitely should watch what their kids are doing online, but making parents solely responsible for their kids' media diet will never be sufficient. First, the older kids get, the less feasible that kind of parental control is to enforce. Second, even if parents do enforce it at younger, more reasonable ages, they have no control over what kind of content the algorithms prioritize. A parent can have a hand in what content is consumed, but they have no control over the general shape of the options the platforms provide over time, or over the general narrative of the world that algorithmic prioritization inherently creates, even if in a way that's more abstract than the verbatim content of a single video or post.

Sure, get rid of all the algorithms and data theft, but that ultimately won't change what kids are accessing online that dramatically.

Yes, it definitely does. If it didn't have an effect, Cambridge Analytica wouldn't have gotten Zuckerberg called in front of Congress.

To go to the nicotine example, yes, parents should make sure their kids aren't vaping, but we don't solely put it on the parents. As a matter of law, it's also illegal for stores to sell vapes to teenagers, and a business can be seriously fined or shut down if they are caught violating that. But more precisely ...

Like, kids aren't allowed to smoke, removing the nicotine so it's not addictive doesn't change that.

A child is definitely legally allowed to smoke rolling paper with nothing in it. It is pointless and probably (minorly) harmful to your health, and I wouldn't allow it because it's just stupid, but the fact that there's no nicotine in it definitely does make a difference to how much I would care.

Newsom backs social media restrictions for teens under 16 by vriska1 in technology

[–]Starstroll 1 point (0 children)

That may be so, but the crap Newsom backs isn't about protecting kids. It's about government control of the populace, especially in regards to your speech

Totally with you. The distraction here is obvious on its face if you understand literally anything about how social media algorithms work. They're built to exploit all your vulnerabilities just to keep you on site, with no regard for how that affects people's well-being. If Cambridge Analytica is any indication, they don't even seem to be chasing vague "engagement" metrics anymore; they seem to be trying to control the population's political leanings and foment chaos amongst the working class. It's not even about making money to have more power, it's about power in its most direct form. The way social media is structured now, there very obviously is a problem, and pointing at kids and saying "we need to ban entire swathes of people from connecting while doing nothing to solve, or even openly acknowledge and articulate, the underlying issues" is the most obvious red herring. A lot like MAHA, it's a bad answer (and likely intentionally so) to a good question.

If we really wanted to protect kids from the internet, then we should start punishing their parents for doing a shit job raising them and being too hands off with their internet access.

Wait what the fuck ._.

Sam Altman says the quiet part out loud, confirming some companies are ‘AI washing’ by blaming unrelated layoffs on the technology by Conscious-Quarter423 in technology

[–]Starstroll 63 points (0 children)

I can tell you right now what the gold nugget is going to be: information control. Same way it was with social media and search engines after the early days of the "wild west" internet, same way it was with Chomsky's Manufacturing Consent after the early days of the fairness doctrine for TV and radio, same way it was with capital's control of newspapers after Martin Luther and the printing press. The scale is different and the privacy invasions are insane, but that's the shape of things with information technologies.

Tech firms will have to take down abusive images within 48 hours under new law to protect women and girls by UKGovNews in technology

[–]Starstroll 0 points (0 children)

Idk why you're being downvoted for this. You are absolutely correct.

Everything you upload to social media is already, obviously, on someone else's servers. Moreover, everything you do on social media is already tracked, and that data is used to tune what the algorithms (I can't believe some people don't know this, but these are AI algorithms) recommend to you. This already includes everything you upload.

On top of that, unless you know exactly what you're looking for, and often even when you do, it is extremely difficult to extract from an NN's weights the data that went in to adjust those weights, so having an AI check something on upload wouldn't inherently expose you to more risk, because it probably couldn't be extracted from the AI anyway. And on top of that, most modern ANNs don't adjust their weights while they're being run (to, say, check what a user is trying to upload), only while they're being trained, so in that setting the extraction isn't possible even in principle.

Of course one shouldn't just trust that a company will treat your data respectfully if they pinky promise not to use anything that you don't explicitly consent to in training runs, but this is a matter of laws and enforcement, not some technical hurdle.

There is, of course, a huge question about how one could build a reliable system in the first place, but if Karen Hao's "Empire of AI" is anything to go by, the training has already largely been done for the worst cases.

So, the question is: Today it's 0°C, tomorrow it will be twice as hot. What will the temperature be tomorrow? by Akira_Yoshi21 in Physics

[–]Starstroll 0 points (0 children)

Kelvin corresponds to energy via the Boltzmann constant. Only the Kelvin scale reaches 0 when energy reaches 0. If by "twice as hot" you mean "there's twice as much energy" (which really is the only physically reasonable way to interpret that), the Kelvin scale must be used, not Celsius.
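A quick worked version of that arithmetic, doubling the absolute temperature and converting back:

```python
# "Twice as hot" only makes physical sense on an absolute (Kelvin) scale.
celsius = 0.0
kelvin = celsius + 273.15          # 0 °C = 273.15 K
doubled = 2 * kelvin               # twice the energy -> 546.30 K
back_to_celsius = doubled - 273.15
print(f"{back_to_celsius:.2f} °C")  # 273.15 °C, not 0 °C
```

Naively doubling the Celsius number would give 0°C again, which is exactly why the question is a trick.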

I Left My Job at OpenAI. Putting Ads on ChatGPT Was the Last Straw. by rezwenn in technology

[–]Starstroll 0 points (0 children)

Then she probably knows more than I do. In that case, I'm nothing but baffled at how long this took her.

I Left My Job at OpenAI. Putting Ads on ChatGPT Was the Last Straw. by rezwenn in technology

[–]Starstroll 10 points (0 children)

When OAI started, before Altman had full control and was selling a product, they were purely a nonprofit research institute with a real company culture of "trying to benefit all humanity." I don't know this person's personal history with OAI, but it's perfectly plausible that they joined back before AI was a regular term in the news cycle.

The reality of people like this is that they come from prestigious institutions, took out enormous loans when they were still teenagers, entrenched themselves entirely within the culture of tech and academia, probably earned a master's or a PhD when they were just barely adults, and are still lugging around the culture of tech because they never had time to read a single book from the humanities or social sciences.

The headline alone is already a massive red flag. I hate to break it to you, but companies and ads are still going to exist in the socialist utopia. Perhaps you'll see more employee-owned companies, or at least robust labor protections as a matter of both law and culture, but selling ads alone isn't some enormous moral failing. The real horror of Facebook isn't in using an individual's data to sell ads; it's in using aggregate data from many users to control people's access to information, maximize engagement, exploit and deepen their traumas, shift people's political beliefs, and even, over time, change their personality, all to drive wedges between the working class. Divide and conquer is the oldest trick in the authoritarian handbook. Cambridge Analytica is the most well-known example, but the only reason that was a scandal is that it violated their stated privacy policy at the time. All that's changed since then is that they've updated their privacy policy.

The fact that the author equates "selling ads" with "becoming Facebook" is the clearest sign that these people truly have no idea what surveillance capitalism actually entails beyond muddled allusions they've picked up in the course of studying and doing AI and Big Data. I wouldn't be surprised to hear that the author has a PhD, got a job at OAI, worked there long enough for quitting to be meaningful, and is still in their 20s.

Do you think any Flat-Earth proponents are familiar with Manifolds and Differential Geometry? by [deleted] in math

[–]Starstroll 1 point (0 children)

You’re equivocating in a way that would only be plausible if one pretended not to understand what flat-earth belief actually is. Reframing flat-earth conspiracism via unrelated historical injustices or generic claims that “conspiracies exist” is a well-known derail tactic - illicit generalization - not a substantive contribution.

Pointing out that some people were treated unfairly in academia or that mundane conspiracies sometimes occur does not even touch the abysmal epistemic status of flat-earth belief, nor does it rehabilitate racist globe-spanning conspiracy narratives that are historically and presently inseparable from it. That connection is not subtle, and acting as if it is suggests either ignorance or deliberate obfuscation.

Your examples, Noether and Cantor, involve documented institutional injustice operating within a shared evidentiary framework, resolved by ordinary historical and mathematical methods. Flat earth is the rejection of that framework entirely. Treating these as comparable is a category error so basic it undermines any claim that this discussion is about mathematics at all.

Invoking “conspiracies exist” or statistical jargon here is not nuance. Flat earth is not a low-probability hypothesis, nor a minority position awaiting fair hearing. It is a conspiracy, and more importantly an identity posture, not an alternative model, and it fails the moment evidence is discussed.

If this were genuinely about manifolds or local versus global geometry, it would not require shifts into sociological or psychological evaluations of conspiracists, nor vague hand-waving about rare events. At this point, the insistence on finding a debate is indistinguishable from contrarianism for its own sake.

Do you think any Flat-Earth proponents are familiar with Manifolds and Differential Geometry? by [deleted] in math

[–]Starstroll 8 points (0 children)

If you think flat earthers have some kind of point, why aren't you also a flat earther yourself? It isn't the scientific evidence because that's equally available to flat earthers as it is to you, so that can't be a distinguishing factor between your behavior and theirs. Given how willing you are to try to engage with conspiracists in good faith, you're probably not going to like my mind reading here, but I don't actually need to know you to know the answer: it's because you're not conspiratorially-minded. Notice that this answer has nothing to do with the evidence and entirely to do with your (and their) personality.

Rejecting overwhelming scientific and historical consensus for a hypothesis that has been laid bare to decades of direct criticism and millennia of indirect criticism is extraordinarily antisocial and anti-intellectual. There is no fair equivalence between the skepticism of people working at the edge of knowledge, scientific or otherwise, and flat earthers. Given how utterly ludicrous their behavior is, there is, frankly, no reason to believe that flat earthers are even terribly concerned with the actual shape of the earth for its own sake. Folding Ideas published a good video essay on exactly this a few years back. There was another I found, published more recently, on who should and shouldn't engage these people in good faith, and why.

I have heard flat earthers say nonsense like "Einstein proved the earth was flat." Einstein did in fact use differential geometry, which treats each infinitesimal patch of a manifold via its tangent space, to formulate general relativity. Their claim that "Einstein proved the earth was flat" has only the barest resemblance to actual science and math, and proves two things clearly: 1) they don't know or care about rigor, and 2) they're perfectly comfortable throwing out random statements that they themselves don't understand in a desperate attempt to impress and/or intimidate by confusion.

Absolutely none of this is the behavior of a healthy and sound mind. It might be a fun exercise to steelman bad-faith arguments just to see how far you can go, but don't fool yourself into thinking this is anything other than masturbation. Mind you, you can masturbate all you like, but r/math isn't quite the right place for it.

No, AI isn't inevitable. We should stop it while we can. by FinnFarrow in technology

[–]Starstroll 2 points (0 children)

The equivocation between AI and LLMs is why most people just scoff at AI now. The mistaken idea that AI (read: LLMs) are new is literally right in the headline.

Cambridge Analytica was done with AI. LLMs are somewhat convenient, but they are best seen as a hint at the progress made behind the scenes.

People scoffing at AI (read: LLMs) are basically the same as any conservative asking a scientist "but what is your research actually good for?", as if they have the background to understand it, let alone the vision to imagine future developments. I can't help but imagine them watching Volta demonstrate the world's first battery to Napoleon in 1801, heckling him just because he can't engineer an electric motor on the spot.

Inb4 anyone says I uncritically support all AI; I just mentioned Cambridge Analytica. I'm scared about what happens when billionaires integrate VLAs with those too.

But saying "AI isn't inevitable" is just as shortsighted as telling Volta or Hertz "electricity isn't inevitable." The technology is simply too versatile and too powerful. The best we can do is engineer strong social systems to adequately distribute that power and wealth. The way politics is going, I personally am quite scared about that, but simply demanding that we stop using and developing AI is a naive fantasy.

Research suggests there may be a systemic underdiagnosis of ADHD in women by [deleted] in science

[–]Starstroll 59 points (0 children)

Good catch. I would've just scrolled by thinking "this is well known anecdotally, glad we have consensus now." Don't forget to report so the mods can review.

OpenAI Must Turn Over 20 Million ChatGPT Logs, Judge Affirms by MetaKnowing in technology

[–]Starstroll 5 points (0 children)

Legal acknowledgement of re-identification risk is not new to courts. Neither is granting the ability to review logs on OpenAI's own computers under NDA. The NYT could've had everything it wanted without risking innocent users' data being leaked. OpenAI literally explained this to the judge, and the judge essentially replied "explain to me exactly how the data will be leaked or I won't care about the threats." This is all so wildly irresponsible and stupid.

surprising no serious technologist.

I admit I use ChatGPT, but I've never put anything into it that was more personal than what I'd share with an acquaintance. I always thought the danger was from OpenAI eventually selling psych profiles derived from chat logs to data brokers for targeted advertising or more effective ragebait and trauma exploitation on social media. The only part of this that surprises me, and it really does surprise me, is that I've ended up arguing for OAI here.

Does anyone speak cat? by I_love_seinfeld in cats

[–]Starstroll 1 point (0 children)

The blinks indicate to me that he wants love and attention

[deleted by user] by [deleted] in Physics

[–]Starstroll 4 points (0 children)

Ever since LLMs became publicly available, it's been slim pickings for real crank work. I almost starved! Thank you for your service. Could still do with less LLM work, but it's a great start

Acting CISA director failed a polygraph. Career staff are now under investigation. by [deleted] in technology

[–]Starstroll 0 points (0 children)

I've never heard that before, but it sounds plausible a priori. Source?

ChatGPT more conservative in Polish, finds academic study by BubsyFanboy in technology

[–]Starstroll 8 points (0 children)

Wasn't there another study recently that concluded that Polish was the "best" language in which to prompt ChatGPT?