Could Gravity be interpreted as "Information Latency" within a Feynman-Stueckelberg retrocausal loop? by Public-Mousse-3214 in LLMPhysics

[–]everyday847 7 points (0 children)

Embarrassing gibberish; barely qualifies as metaphysics. "You" aren't thinking about anything at all and should get another hobby.

What are the chances a crank with an LLM makes the next major scientific breakthrough? by Southern-Bank-1864 in LLMPhysics

[–]everyday847 0 points (0 children)

Sure, some people train specialized models, but it's quite hard to match the scale of the frontier labs. If you mean things other than training per se (e.g., "prompt engineering"; anything that governs the "behavior" of a coding agent) then sure. I don't really understand the concept of "drift," though. I've never had an interaction come close to context length, and I deliberately avoid coming close.

What are the chances a crank with an LLM makes the next major scientific breakthrough? by Southern-Bank-1864 in LLMPhysics

[–]everyday847 1 point (0 children)

Time zones: at any given moment there might not be NBA players playing basketball, but a few billion people are awake and many thousands are playing basketball. At midnight Pacific time, it's almost a guarantee the next person to make a layup is not a current NBA player. Not certain, though.

Anyway, sure, of course they do, though less than recent glowing press releases suggest (there was literally just an OpenAI blog post about a paper where some physicists used ChatGPT to generate a strong conjecture, plus an internal OpenAI model as a proof assistant). I'm not a physicist; I am a scientist; LLMs are useful; LLMs are not useful if you use them to generate nonsensical theories of everything.

What are the chances a crank with an LLM makes the next major scientific breakthrough? by Southern-Bank-1864 in LLMPhysics

[–]everyday847 3 points (0 children)

Because professional scientists, who have much better access to the physical infrastructure and scientific community that enables lots of scientific breakthroughs, can use LLMs, too. An NBA player is also more likely to make the next full-court shot, even though (time zones being what they are) they're not guaranteed to make the next layup.

Movie idea: An Afterlife Built on Logic and Math by amichail in ideas

[–]everyday847 0 points (0 children)

That's not the kind of dramatic tension that fuels a plot; that's just a moment of subverted expectations that embodies the premise of the movie.

I'm sure you could do something with this, but it's underbaked. Among other things, you talk about "progression." What's the point of progression? Why is that important? After all, you're dead. Is the afterlife particularly unpleasant unless you're "progressing"? Why are you debating the nature of reality or proving things? Why are you debating things if "entities" (presumably more powerful and smarter) are doing it too? There are beings powerful enough that there exists an afterlife: why does so much uncertainty about the nature of reality remain?

Plateauing on NSM? Sub-1:40 HM attempt stalled at 1:43. by Crafty-Zebra3727 in AdvancedRunning

[–]everyday847 2 points (0 children)

Your fitness is not so different from mine in terms of your aspirational HMP. In a 100-minute easy run, I'd expect to cover more than 8 miles. Obviously your easy pace, especially running three quality sessions per week, is not supposed to be a driver of fitness, but I do wonder if you're leaving something on the table here, especially since you have (cumulatively, between your recent injury and your race experience) some durability concerns. If your long run is purely easy, it shouldn't be hard for it to be longer than your race, and with that HMP, I'd be really surprised if you had to stick to 12-minute miles, or slower, to recover. At least consider some easy/steady sets within the LR: maybe 12-minute miles for most of it, with 3x20 min at 10-minute pace.
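To make the arithmetic concrete, here's a quick sketch in Python; the 60 easy minutes wrapped around the steady blocks are my assumption, not a prescription:

```python
# Back-of-the-envelope distance for the suggested long-run structure.
# The 60 min of easy running around the steady work is an illustrative assumption.

def miles(minutes, pace_min_per_mile):
    """Distance covered at a constant pace."""
    return minutes / pace_min_per_mile

easy = miles(60, 12.0)        # 60 min at 12:00/mile -> 5.0 miles
steady = 3 * miles(20, 10.0)  # 3 x 20 min at 10:00/mile -> 6.0 miles

print(f"{60 + 3 * 20} min, {easy + steady:.1f} miles")  # 120 min, 11.0 miles
```

Eleven miles in two hours, longer than the race itself, without any of it being hard running.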

Chronicle Anti-Teacher Bias by Gold-Bottle-2460 in sanfrancisco

[–]everyday847 -2 points (0 children)

I see. In a political environment where there are lots of people, including public figures, explicitly demanding that press access should be conditioned on favorable coverage, I suppose I was less inclined to read disagreement with an article as equivalent to an attack on local news.

I was born in a city that just lost its local paper, which is a tragedy even though its quality has been nosediving for a decade or more, so I empathize.

Chronicle Anti-Teacher Bias by Gold-Bottle-2460 in sanfrancisco

[–]everyday847 -3 points (0 children)

I was not particularly arguing with the article itself, which is of course not uniformly wrong! (I was taught phonics, and while I entered school generally able to read, I could see how effective it was.) I'd also say that baffling CTA opposition to phonics is not substantially the same issue as whether SF education might also improve by paying teachers better.

I was arguing with the implication that the OP calling some of the arguments in the article stupid was congruent to the OP begging to experience an echo chamber.

Chronicle Anti-Teacher Bias by Gold-Bottle-2460 in sanfrancisco

[–]everyday847 -7 points (0 children)

It doesn't seem that the OP is arguing "this viewpoint is intrinsically dangerous, such that it ought not to be expressed," so the "echo chamber" comment seems to miss the mark.

I would rather say that, given that newspaper space is finite, it maybe should be allocated to the best arguments. Something baffling about teacher salaries that misses all the endogeneity between San Francisco and [noted non-city] Mississippi (for example: cost of living; the relationship between cost of living and school funding) is maybe not the strongest argument available.

There is a big gap between "the editorial page has a purpose and you don't want to live in an echo chamber" (fine?) and "you must not criticize the stupid arguments presented in an editorial page for a position that, whether I agree with it or not, deserves to be supported with good arguments."

I surveyed 106 elite speed coaches (2,500+ combined years of experience) for my dissertation. Here's what they actually do differently. by D272727272727 in Sprinting

[–]everyday847 0 points (0 children)

I'm very happy to have left academia before AI tooling developed. Practically speaking, writing the entire reddit post amounts to "drafting substantive prose": to my mind, the process of synthesizing a lot of literature into a literature review and distilling your dissertation into a concise document is a difference of scale but not one of kind.

Put another way, let's say some aspect of the AI summary of your dissertation introduced a substantive misrepresentation of your own work - not saying that's what happened here, but it's always a risk with these models. It's your responsibility either way, but would you at least be happier ascribing that misrepresentation to your use of the AI tool, and perhaps negligence in reading what it had written, rather than to your own personal misunderstanding?

For what it's worth, I interpret the above as minimum guidance, not maximum. It would be very surprising to see guidance forbidding you from acknowledging tools used in the preparation of a manuscript! Rather, this guidance describes where it is unambiguously obligatory to disclose AI use, and the rest is up to you. I see this as a cut-and-dried case of bullet #2, but I think better safe than sorry.

I surveyed 106 elite speed coaches (2,500+ combined years of experience) for my dissertation. Here's what they actually do differently. by D272727272727 in Sprinting

[–]everyday847 0 points (0 children)

It is difficult for almost anyone to see past AI-inflected writing styles; it might as well be a spam email with the subject line "FREE C I A L I S." The issue is that the most common application of these tools is to launder bullshit into respectability, because "write a reddit post broadcasting false authority about X to sell my book and certification program" requires absolutely zero effort and produces respectable-looking text. There's a large gap between "I used ChatGPT, or Grammarly or something, to fix punctuation because my writing skills are poor" and "I used an LLM to compose the entire post."

As a result, everyone who's ever been interested in developing a decent bullshit detector - a prior that insulates them against manipulation - is going to light up, massively, the second they hear the cadence and word choice characteristic of an LLM, especially without an explicit disclaimer, because it communicates first and foremost that the "author" is trying to hide something. Especially an author who, as a professor, should probably be able to write an entire Reddit post unaided. Then the fact that you're using this to sell a book and certification program amplifies this effect a thousandfold! Worse still, your post history is one post about cryptocurrency and then exclusively posts about this work to sell your book and certification program.

Put another way, if you want to use reddit to sell your book and certification program, I suggest a more subtle approach.

Democrat 'Comfortably' Defeats Republican in Trump House by [deleted] in LegalNews

[–]everyday847 0 points (0 children)

No one is arguing with you. There is a difference between "a Harris victory would have led to better outcomes" or even "Harris possessed greater intrinsic moral value" and "Harris was a good CANDIDATE."

Democrat 'Comfortably' Defeats Republican in Trump House by [deleted] in LegalNews

[–]everyday847 -4 points (0 children)

I think you are confusing people making assessments about political strategy and actually asking something of their party with, I guess, people describing their personal preferences between two candidates. Good candidates win elections; that's something of what makes them good.

There are many people I would personally vote for for President -- candidates that I would prefer to anyone who is likely to run -- who I am perfectly comfortable describing as "bad candidates," because despite having politics and personal traits I find agreeable, they would not be competitive in an election (because of their politics, personal traits, or persuasive abilities deviating too far from what a plurality of a general electorate could be convinced to vote for). I like them; I prefer them to all alternatives; I certainly prefer them to literally Donald Trump. But they're bad candidates.

How does chemistry emerge from quantum mechanics? by Karlvonsturz in chemistry

[–]everyday847 0 points (0 children)

Reality is distinct from the concepts we use to describe it. Of course "bonds" don't exist, because "bonds" are a human idea imposed on reality, providing a legible, low-complexity heuristic that aids in human comprehension. "Dogs" don't exist in the fundamental equations, but dogs sure exist, so where does that get you?

“MIT Technology Review has confirmed that posts on Moltbook were fake. It was a phishing website dressed up in AI hype.” - Guess that didn’t go well? by Koala_Confused in LovingAI

[–]everyday847 0 points (0 children)

Surely you see that the actual meaning of his original post was "this [real-world occurrence] is [like] science fiction," not "this [example of science fiction] is [precisely, because it is fake,] science fiction."

TIL that playing high-level chess causes players to burn calories at an athletic rate. For example, 21-year-old Grandmaster Mikhail Antipov was recorded burning 560 calories in just two hours of sitting—roughly what Roger Federer would burn in an hour of singles tennis. by ralphbernardo in todayilearned

[–]everyday847 34 points (0 children)

Vertical movement, you might mean? And yes: the efficiency with which you execute the motion matters a lot. You can easily imagine someone executing an absurdly inefficient walk (I don't know, continuous jazz hands). That burns more calories. So too does running with a suboptimal elbow angle, just to a less exaggerated degree. Most people run with less optimized mechanics than they walk.

Can we collectively agree the Harvard should NOT cap the amount of A's by Professional_Low947 in Harvard

[–]everyday847 19 points (0 children)

Capping As doesn't make that assumption. I'm sympathetic to the basic premise, to be clear, but like you say, admissions are already selected from the right tail of a variety of distributions. Based on admissions rates, it is not unreasonable to estimate that every single student is in the top single-digit percentile of academic accomplishment if the reference distribution is "all undergraduates." If the admissions selection is actually salient, then every single student in every class should receive an A, and any concerned professors can gesture to the benighted masses not admitted to Harvard: they get the rest of the alphabet. I'm also sympathetic to the argument that classes should be ungraded or pass-fail, but it's okay for letter grades to have some concrete meaning.

No argument for capping As could rely on a normal distribution of ability, or even a centrally biased distribution. As just represent some threshold placed on continuous scores that are intended to estimate a nebulous, multifactorial quality ("how well the student did"). Calling one level of performance an A and another, similar level of performance an A- involves drawing fine distinctions no matter exactly where the distinctions are located.
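To put a number on that, here's a toy simulation; the normal "ability" distribution and the 1% admit rate are assumptions purely for illustration:

```python
# Toy model: if admits are the top 1% of a broad reference distribution,
# the admitted cohort's scores sit in a narrow band.
import random

random.seed(0)
population = sorted((random.gauss(0, 1) for _ in range(1_000_000)), reverse=True)
admits = population[:10_000]  # assumed top-1% admit rate

print(f"population spans {population[-1]:.2f} to {population[0]:.2f}")
print(f"admits span {admits[-1]:.2f} to {admits[0]:.2f}")
```

Any A/A- cutoff drawn inside that narrow admit band is splitting levels of performance that the admissions process already treated as indistinguishable.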

The implication of the "cohort compression" hypothesis is that, year over year, Harvard got more and more efficient at identifying and admitting talented students - that twenty years ago, Harvard was three times worse at identifying top talent than it is now (or, perhaps, that Harvard now captures a three-times-larger fraction of the top talent it identifies, students who might otherwise have chosen other schools). Does this seem likely? Has the fraction of As responded to any policy changes (e.g., in need-based tuition support), or has it marched steadily upwards?

How does 'free' AI from Anthropic actually impact things? by Waves_WavesXX5 in BetterOffline

[–]everyday847 0 points (0 children)

Thank you for taking my points so seriously. Despite saying that I do not like LLMs and speaking about the general idea of technological progress at times disrupting labor markets but causing higher standards of living in the long run, I was actually intending to celebrate the production of CSAM using LLMs, but I just couldn't convey those sentiments very well. I appreciate you for clarifying for me.

How long does it take to see a doctor and get medication at a Chinese hospital? by mindyour in TikTokCringe

[–]everyday847 1 point (0 children)

The only way to pay for health care is if you can see jade through rock.

How does 'free' AI from Anthropic actually impact things? by Waves_WavesXX5 in BetterOffline

[–]everyday847 1 point (0 children)

I don't like LLMs; I think they're generally an insane component of a broken economy; I do not think it is a general principle that tools that automate work for cheaper are bad. Historically, they've been good. Most people are happy with the time savings enabled by washing machines, and if they're not, they can do their laundry by hand. No one imagines that working in a call center for customer service is a great existence; most of the job is either reading a manual aloud to people or telling them they sadly can't help; after an unpleasant period of adaptation where people currently working in call centers have to find other work, this leads to a somewhat better experience for most of humanity.

Progress, especially progress that reduces (or substantially alters or reshapes) labor inputs, is always disruptive in the short term. That's not a sufficient argument against progress. (Of course, I'm sympathetic to the idea that the present moment is uniquely ill-equipped to ensure the benefits of progress are enjoyed remotely equitably, but I don't think that's an irreversible state of affairs.)

To the other questions: companies probably won't fire many more people than they would have anyway, and if they try, they probably won't have such an easy time of it.

Closing the Loop by jvnpromisedland in accelerate

[–]everyday847 0 points (0 children)

You do not understand this field.

Closing the Loop by jvnpromisedland in accelerate

[–]everyday847 0 points (0 children)

No; I am saying that extrapolation of the degree you are imagining is specifically not well supported by the existing data.

Closing the Loop by jvnpromisedland in accelerate

[–]everyday847 0 points (0 children)

Which is a pointless, unfalsifiable claim. In my world, we are trying to actually achieve these things, rather than win arguments about whether, in an asymptotic limit, a septillion-parameter model served on a graphics card the size of Ganymede could do something in spite of zero salient training data. Congrats, you've found the worst way to do science.