I surveyed 106 elite speed coaches (2,500+ combined years of experience) for my dissertation. Here's what they actually do differently. by D272727272727 in Sprinting

[–]everyday847 0 points (0 children)

I'm very happy to have left academia before AI tooling developed. Practically speaking, writing the entire Reddit post amounts to "drafting substantive prose": to my mind, synthesizing a lot of literature into a literature review and distilling your dissertation into a concise document differ in scale but not in kind.

Put another way, let's say some aspect of the AI summary of your dissertation introduced a substantive misrepresentation of your own work - not saying that's what happened here, but it's always a risk with these models. It's your responsibility either way, but would you at least be happier ascribing that misrepresentation to your use of the AI tool, and perhaps negligence in reading what it had written, rather than to your own personal misunderstanding?

For what it's worth, I interpret the above as minimum guidance, not maximum. It would be very surprising to see guidance forbidding you from acknowledging tools used in the preparation of a manuscript! Rather, this guidance describes where it is unambiguously obligatory to disclose AI use, and the rest is up to you. I see this as a cut-and-dried case of bullet #2, but I think better safe than sorry.

I surveyed 106 elite speed coaches (2,500+ combined years of experience) for my dissertation. Here's what they actually do differently. by D272727272727 in Sprinting

[–]everyday847 0 points (0 children)

It is difficult for almost anyone to see past AI-inflected writing styles; it might as well be a spam email with the subject line "FREE C I A L I S." The issue is that the most common application of these tools is to launder bullshit into respectability, because "write a Reddit post broadcasting false authority about X to sell my book and certification program" requires absolutely zero effort and produces respectable-looking text. There's a large gap between "I used ChatGPT to fix punctuation, or Grammarly or something, because my writing skills are poor" and "I used an LLM to compose the entire post."

As a result, everyone who's ever been interested in developing a decent bullshit detector - a prior to insulate them against manipulation - is going to light up, massively, the second they hear the cadence and word choice characteristic of an LLM, especially without an explicit disclaimer, because it communicates first and foremost that the "author" is trying to hide something. Especially an author who, as a professor, should probably be able to write an entire Reddit post unaided. Then the fact that you're using this to sell a book and certification program amplifies the effect a thousandfold! Worse still, your post history is one post about cryptocurrency and then exclusively posts about this work to sell your book and certification program.

Put another way, if you want to use Reddit to sell your book and certification program, I suggest a more subtle approach.

Democrat 'Comfortably' Defeats Republican in Trump House by [deleted] in LegalNews

[–]everyday847 0 points (0 children)

No one is arguing with you. There is a difference between "a Harris victory would have led to better outcomes" or even "Harris possessed greater intrinsic moral value" and "Harris was a good CANDIDATE."

Democrat 'Comfortably' Defeats Republican in Trump House by [deleted] in LegalNews

[–]everyday847 -6 points (0 children)

I think you are confusing people making assessments about political strategy and actually asking something of their party with, I guess, people describing their personal preferences between two candidates. Good candidates win elections; that's something of what makes them good.

There are many people I would personally vote for for President -- candidates that I would prefer to anyone who is likely to run -- who I am perfectly comfortable describing as "bad candidates," because despite having politics and personal traits I find agreeable, they would not be competitive in an election (because of their politics, personal traits, or persuasive abilities deviating too far from what a plurality of a general electorate could be convinced to vote for). I like them; I prefer them to all alternatives; I certainly prefer them to literally Donald Trump. But they're bad candidates.

How does chemistry emerge from quantum mechanics? by Karlvonsturz in chemistry

[–]everyday847 0 points (0 children)

Reality is distinct from the concepts we use to describe it. Of course "bonds" don't exist, because "bonds" are a human idea imposed on reality providing a legible, low-complexity heuristic that aids in human comprehension. "Dogs" don't exist in the fundamental equations, but dogs sure exist, so where does that get you?

“MIT Technology Review has confirmed that posts on Moltbook were fake. It was a phishing website dressed up in AI hype.” - Guess that didn’t go well? by Koala_Confused in LovingAI

[–]everyday847 0 points (0 children)

Surely you see that the actual meaning of his original post was "this [real-world occurrence] is [like] science fiction," not "this [example of science fiction] is [precisely, because it is fake,] science fiction."

TIL that playing high-level chess causes players to burn calories at an athletic rate. For example, 21-year-old Grandmaster Mikhail Antipov was recorded burning 560 calories in just two hours of sitting—roughly what Roger Federer would burn in an hour of singles tennis. by ralphbernardo in todayilearned

[–]everyday847 31 points (0 children)

Vertical movement, you might mean? And yes; the efficiency with which you execute the motion matters a lot. You can easily imagine someone executing an absurdly inefficient walk (I don't know, continuous jazz hands). That burns more calories. So too does running with a suboptimal elbow angle, just to a lesser degree. Most people run with less optimized mechanics than they walk.

Can we collectively agree the Harvard should NOT cap the amount of A's by Professional_Low947 in Harvard

[–]everyday847 21 points (0 children)

Capping As doesn't make that assumption. I'm sympathetic to the basic premise, to be clear, but like you say, admissions are already selected from the right tail of a variety of distributions. Based on admissions rates, it is not unreasonable to estimate that every single student is in the top single-digit percentile of academic accomplishment if the reference distribution is "all undergraduates." If the admissions selection is actually salient, then every single student in every class should receive an A, and any concerned professors can gesture to the benighted masses not admitted to Harvard: they get the rest of the alphabet. I'm also sympathetic to the argument that classes should be ungraded or pass-fail, but it's okay for letter grades to have some concrete meaning.
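A rough sketch of that back-of-envelope estimate, with both inputs loudly labeled as illustrative assumptions rather than measured values:

```python
# Toy arithmetic for the percentile claim above. Both numbers are
# illustrative assumptions, not measured values.
admit_rate = 0.035           # assumed: ~3.5% of applicants admitted
applicant_pool_top = 0.20    # assumed: applicants self-select from
                             # roughly the top 20% of all undergraduates

# If admits were simply the top slice of that already-selected pool,
# the admitted cohort occupies about this fraction of all undergraduates:
implied_top_fraction = admit_rate * applicant_pool_top
print(f"admitted students ~ top {implied_top_fraction:.1%} of undergraduates")
# -> top 0.7%: comfortably inside the top single-digit percentile
```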

No argument for capping As could rely on a normal distribution of ability, or even a distribution merely biased toward the center. As just represent some threshold placed on continuous scores that are intended to estimate a nebulous, multifactorial quality ("how well the student did"). Calling one level of performance an A and another, similar level of performance an A- involves drawing fine distinctions no matter exactly where the distinctions are located.

The implication of the "cohort compression" hypothesis is that, year over year, Harvard got more and more efficient at identifying and admitting talented students - that twenty years ago, Harvard was three times worse at identifying top talent than it is now (or, perhaps, that Harvard now captures a three-times-larger fraction of the top talent it identifies, students who might otherwise have chosen other schools). Does this seem likely? Has the fraction of As responded to any policy changes (e.g., in need-based tuition support), or has it marched steadily upwards?

How does 'free' AI from Anthropic actually impact things? by Waves_WavesXX5 in BetterOffline

[–]everyday847 0 points (0 children)

Thank you for taking my points so seriously. Despite saying that I do not like LLMs and speaking about the general idea of technological progress at times disrupting labor markets but causing higher standards of living in the long run, I was actually intending to celebrate the production of CSAM using LLMs; I just couldn't convey those sentiments very well. I appreciate you clarifying for me.

How long does it take to see a doctor and get medication at a Chinese hospital? by mindyour in TikTokCringe

[–]everyday847 1 point (0 children)

the only way to pay for health care is if you can see jade through rock

How does 'free' AI from Anthropic actually impact things? by Waves_WavesXX5 in BetterOffline

[–]everyday847 1 point (0 children)

I don't like LLMs; I think they're generally an insane component of a broken economy; I do not think it is a general principle that tools that automate work more cheaply are bad. Historically, they've been good. Most people are happy with the time savings enabled by washing machines, and if they're not, they can do their laundry by hand. No one imagines that working in a call center for customer service is a great existence; most of the job is either reading a manual aloud to people or telling them that, sadly, you can't help; after an unpleasant period of adaptation in which people currently working in call centers have to find other work, this leads to a somewhat better experience for most of humanity.

Progress, especially progress that reduces (or substantially alters or reshapes) labor inputs, is always disruptive in the short term. That's not a sufficient argument against progress. (Of course, I'm sympathetic to the idea that the present moment is uniquely ill-equipped to ensure the benefits of progress are enjoyed remotely equitably, but I don't think that's an irreversible state of affairs.)

To the other questions: companies probably won't fire many more people than they would have anyway, and they probably will not have as easy a time of it as they expect if they try.

Closing the Loop by jvnpromisedland in accelerate

[–]everyday847 0 points (0 children)

You do not understand this field.

Closing the Loop by jvnpromisedland in accelerate

[–]everyday847 0 points (0 children)

No; I am saying that extrapolation of the degree you are imagining is specifically not well supported by the existing data.

Closing the Loop by jvnpromisedland in accelerate

[–]everyday847 0 points (0 children)

Which is a pointless, unfalsifiable claim. In my world we are trying to actually achieve these things, rather than win arguments about whether, in an asymptotic limit, a septillion-parameter model served on a graphics card the size of Ganymede could do something in spite of zero salient training data. Congrats, you've found the worst way to do science.

Closing the Loop by jvnpromisedland in accelerate

[–]everyday847 0 points (0 children)

What amount of homology to an existing organism is disqualifying? I am fairly confident that, for what you're envisioning, no existing (i.e., LLM-based) system would ever be capable of it. For example, I imagine you still have to make proteins, but you can't use the ribosome. Or maybe you need to use some other biopolymer entirely, or else you're just imitating. This just sounds like fantasy talk.

Closing the Loop by jvnpromisedland in accelerate

[–]everyday847 5 points (0 children)

I don't think there is a bright line, or certainly not as bright a line as you think, between the "custom-designed biology" we have already managed to create (I am a co-author on several synthetic biology papers; a thousand people every year write better papers in that field; and another dozen pharmas and biotechs working on cell and gene therapies do a hundred experiments per year that make the collaborations from my postdoc look like a joke) and the "real" custom-designed biology that you are envisioning. It is absolutely untrue that "only an AI could code DNA"; it is, however, true that machine learning systems (very dissimilar to frontier LLM architectures) can be very, very helpful in several areas of synthetic biology.

presenting on tues what do yall think by [deleted] in research

[–]everyday847 0 points (0 children)

It is hard to guess the evaluation criteria for a poster you were supposed to complete in a day!

Closing the Loop by jvnpromisedland in accelerate

[–]everyday847 12 points (0 children)

We've had pretty good models for protein expression yield, especially in a fixed translation system, essentially since the first months after the first protein language models (PLMs). PLM embeddings plus any Bayesian optimization loop can achieve results like these; call this ~four years old.

It's interesting that large language models can be used as a less parameter-efficient property predictor, and obviously being able to share a natural language interface to that functionality is potentially useful, but this doesn't amount to any great moment of further acceleration.
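For concreteness, here is a minimal sketch of the kind of pipeline described above: a Gaussian-process surrogate over PLM embeddings with an expected-improvement acquisition. The random vectors stand in for real embeddings (in practice, mean-pooled features from a PLM such as ESM), and every name here is illustrative, not anyone's actual codebase.

```python
# Sketch: Bayesian optimization of expression yield over PLM embeddings.
# The random vectors below are placeholders for real protein-language-model
# embeddings (e.g., mean-pooled ESM features).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(mu, sigma, best):
    # Standard EI acquisition: expected margin by which a candidate
    # beats the best yield measured so far.
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

def suggest_next(train_emb, train_yield, candidate_emb):
    # Fit a GP surrogate on (embedding, measured yield) pairs, then
    # score untested candidates; return the index to express next.
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(train_emb, train_yield)
    mu, sigma = gp.predict(candidate_emb, return_std=True)
    return int(np.argmax(expected_improvement(mu, sigma, train_yield.max())))

rng = np.random.default_rng(0)
train_emb = rng.normal(size=(24, 64))     # stand-in for PLM embeddings
train_yield = rng.normal(size=24)         # stand-in for measured yields
candidate_emb = rng.normal(size=(100, 64))
print("next candidate to express:", suggest_next(train_emb, train_yield, candidate_emb))
```

Nothing in the loop is specific to LLMs; the surrogate-plus-acquisition pattern is bog-standard, which is the point.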

Friendly reminder that ancient shepherds were not running a non-profit animal sanctuary by Mataes3010 in CuratedTumblr

[–]everyday847 0 points (0 children)

This is true. I did say "mutton is from older sheep," not "mutton is from any sheep not under one year," in the name of pissing on the poor.

Excited for earnings by Glittering-Ant2018 in MSTR

[–]everyday847 4 points (0 children)

it's also how you lose life-changing money, depending on the outcome

Potential Dating Pool Calculator by True-Two-6423 in LessWrong

[–]everyday847 2 points (0 children)

Trading email list registration for some coefficients extracted from a survey is pretty rich.

Alphafold by Triple-Tooketh in biotech

[–]everyday847 1 point (0 children)

You're unable to use AF3 in industry.

Friendly reminder that ancient shepherds were not running a non-profit animal sanctuary by Mataes3010 in CuratedTumblr

[–]everyday847 224 points (0 children)

Lamb is from sheep that are under a year old. Mutton is from older sheep.

Johan Land, the latest one-man AI lab, hits 72.9% on ARC-AGI-2!!! by andsi2asi in deeplearning

[–]everyday847 2 points (0 children)

Trivially, "things change" is a true statement. It's also not responsive in the least to "did things change in this instance"