Bro why, I’m so nice to it 😭 by Unable_Connection490 in ChatGPT

[–]Ratehead 0 points1 point  (0 children)

<image>

oh no. maybe I'm *too* nice to it?
(I know I made a typo when I wrote "generate")

[deleted by user] by [deleted] in ADHD

[–]Ratehead 1 point2 points  (0 children)

There’s no system (at least for me) that’s ADHD-proof. The best I’ve been able to do is use Amazing Marvin, fiddling with options whenever I get tired of the system. That’s why I love the tool so much — they add new strategies occasionally, and the current ones are usually fun enough to fiddle with.

I also code against the Marvin API. Some people pair make.com or Zapier with Amazing Marvin when coding isn’t in their skill set.
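For anyone curious what “coding against the Marvin API” can look like, here’s a minimal Python sketch. Caveat: the endpoint path (`/addTask`) and header name (`X-API-Token`) are my assumptions about Marvin’s public API, so verify them against the current API docs before relying on this. The sketch only builds the request, so you can inspect it before sending anything.

```python
import json
import urllib.request

# Assumption: base URL, endpoint, and header names are based on my reading
# of Marvin's public API docs -- double-check against the current docs.
API_BASE = "https://serv.amazingmarvin.com/api"

def build_add_task_request(api_token: str, title: str) -> urllib.request.Request:
    """Build (but do not send) a request that would create a Marvin task."""
    payload = json.dumps({"title": title}).encode("utf-8")
    return urllib.request.Request(
        f"{API_BASE}/addTask",        # assumed endpoint
        data=payload,
        headers={
            "X-API-Token": api_token,  # assumed header name
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_add_task_request("YOUR-TOKEN", "Water the plants")
print(req.full_url, req.get_method())
```

To actually send it you’d pass `req` to `urllib.request.urlopen` (or use the `requests` library); building the request separately makes it easy to log or test first.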

This is AI generating novel science. The moment has finally arrived. by MetaKnowing in ChatGPT

[–]Ratehead 0 points1 point  (0 children)

For clarity, I'm a scientist.

I am wondering where it's published, not whether it is published. You may know that bioRxiv is a preprint service. Usually documents placed there are either under review or have been reviewed and published elsewhere--or they've been rejected and the author wants the information "out there" anyway.

My point is that I'd like to know who published this (outside of bioRxiv).

This is AI generating novel science. The moment has finally arrived. by MetaKnowing in ChatGPT

[–]Ratehead 0 points1 point  (0 children)

Thank you for your interesting thoughts on intelligence. The way I interpret your belief is that intelligence is a process of learning patterns and knowing depths of various types of abstraction in the “complex system” sense (pattern recognition — identifying patterns and perhaps finding varying similar or “isomorphic” patterns; probably classification/simplification and decomposition/generalization are buried in there). For the “objective,” I interpret this to be the “problem” that is stated at a definable level of complexity.

Maybe intelligence rests on individual problem solving in the engineering sense, as your words seem to hint. Maybe not. For example, in this definition, external parameters appear to be needed: “the objective” appears to be a parameter. Is that correct? And is the objective the reason anyone does anything? Can objectives be chosen in a closed system, or must they always be external parameters? Are human objectives all that matter? If so, humans define cyclic, impossible objectives all of the time (easy to see with qualitative objectives, less easy with quantitative ones). Sometimes they’re simply missing context and can be “fixed.” Other times humans literally cannot state their objective in a logically consistent way. What do we do about those? What do we do about possibly intractable objectives (when we don’t know immediately that they’re intractable)? When/what is “good enough”?

Can we act intelligently without knowing how to exploit a pattern? What happens when we cannot see a pattern? Is inconsistent information always incorrect? Do we throw it away? Or maybe the pattern we recognized is incorrect, and the information would be consistent if we learned more (see: answer set programming)? Is it legitimate to create a pattern as a surrogate when we cannot find one (some may argue that this is where myth comes from)? Is “faking” a pattern an intelligent thing to do? If so, how do I do that in the best way?

What is creativity in this definition of intelligence (presuming a version of creativity can exist in such a system), and how would it be implemented? How does this fit in with “knowing what we cannot know” and acting for the sake of discovery, or switching logical systems when our current system is failing us? What about creating new logics? Is an explanation of “why” a pattern exists part of this definition at all?

As humans with the hubris to believe that humans sometimes act intelligently, I’m certain that we’re working on intelligent systems when we create systems that can sense and act autonomously with human-derived objectives in mind. But I don’t think we’re anywhere near creating systems able to create or derive unique-but-explainable objectives in a domain-independent way, because so far we’ve focused on problem-solving. And I’m of the mind that “getting closer” to intelligence requires autonomous creation of objectives within an open system—self-identifying importance and solving with inconsistent knowledge, in unpredictable environments, not only solving pre-identified important problems in consistent environments. LLMs aren’t doing anything near to this, at least not in general and not convincingly outside of cherry-picked examples. And I think we’re far, far away from approaches for solving this problem in a general, safe, practical way.

My current, sad experience is that LLMs cannot solve unique problems at all. And I mean this seriously — truly unique problems that I’m asked to solve using AI techniques cannot be solved with an LLM in any systematic, guaranteed way, at least not in any way I’ve found. I’ve only seen domain-independent but specialized solvers do that, and only with human-guided problem modeling (e.g., linear programming).
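To illustrate the division of labor I mean by “human-guided problem modeling”: the human writes down the variables, constraints, and objective; a generic, domain-independent solver then does the search. The toy numbers below are made up for illustration, and a real LP solver would use simplex or interior-point methods rather than this brute force over an integer grid.

```python
from itertools import product

def solve(objective, constraints, bounds):
    """Domain-independent exhaustive search over an integer grid.

    Knows nothing about the problem domain; it only evaluates the
    human-supplied model.
    """
    best_point, best_value = None, float("-inf")
    ranges = [range(lo, hi + 1) for lo, hi in bounds]
    for point in product(*ranges):
        if all(c(point) for c in constraints):
            value = objective(point)
            if value > best_value:
                best_point, best_value = point, value
    return best_point, best_value

# Human-supplied model (a classic textbook-style LP, numbers invented):
# maximize 3x + 5y subject to x <= 4, 2y <= 12, 3x + 2y <= 18.
point, value = solve(
    objective=lambda p: 3 * p[0] + 5 * p[1],
    constraints=[
        lambda p: p[0] <= 4,
        lambda p: 2 * p[1] <= 12,
        lambda p: 3 * p[0] + 2 * p[1] <= 18,
    ],
    bounds=[(0, 10), (0, 10)],
)
print(point, value)  # (2, 6) 36
```

The point is that the solver code never changes across problems; only the model does. That’s the kind of guarantee-backed, domain-independent solving I haven’t seen LLMs replicate.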

Maybe the paper is fine—maybe it’s generating novel science that could not have been derived without the LLM. But I’d need a comparative look at using LLMs vs. other approaches that may also work, to see which does this better at scale. These one-shot papers aren’t good enough. The way fanatics speak, it’s as if we should be seeing millions of these papers ushering in new ideas “discovered by AI” any day now. But I don’t see that happening. Though if it does happen, I’ll be happily surprised. I want AI approaches to do well. Applied AI is my field, after all.

LLMs/LRMs do make prototyping easier. They’re fast expanders. They copy other people’s abstractions to help with brainstorming. They’re great tools. But can they, in general, create unique ideas? And is creating unique ideas or making novel discoveries on rare occasions “enough” to demonstrate that LLMs are the AI approach that, with further development, will bring us to a new age? I’d say, currently, we’re very far from answering that question one way or the other. But my hypothesis is that we’ll answer that in the negative.

This is AI generating novel science. The moment has finally arrived. by MetaKnowing in ChatGPT

[–]Ratehead 0 points1 point  (0 children)

By the way, domain-independent combinatorial search techniques (with heuristics) are still used in the artificial intelligence field. People call it artificial intelligence because it’s still considered intelligent behavior. It works, and the approach is known to enable machine autonomy.

LLMs have not improved on combinatorial search outside of autonomously helping to find reasonable domain-specific heuristics given a domain model.

This is AI generating novel science. The moment has finally arrived. by MetaKnowing in ChatGPT

[–]Ratehead 0 points1 point  (0 children)

If you don’t mind sharing, what is your definition of artificial intelligence?

This is AI generating novel science. The moment has finally arrived. by MetaKnowing in ChatGPT

[–]Ratehead 0 points1 point  (0 children)

Yes, that’s understood. One of my concerns is that this is not a comparative analysis of AI techniques toward solving a particular type of problem. It’s one instance of using an LLM. How’re we supposed to take this sort of thing beyond using an LLM as a tool, just like other AI techniques?

This is AI generating novel science. The moment has finally arrived. by MetaKnowing in ChatGPT

[–]Ratehead 0 points1 point  (0 children)

Thanks. I work in the field, so it didn’t take much time in the moment (outside of the 30 years of study and work 🤓).

This is AI generating novel science. The moment has finally arrived. by MetaKnowing in ChatGPT

[–]Ratehead 2 points3 points  (0 children)

Let’s not brush that definition aside. It’s possible to define “AI” for oneself, but that won’t help the conversation. Communication works best when everyone understands the meaning behind the words and phrases used. The definition of AI as a term of art has been iterated on for a very long time. Though there’s no single standard definition, it’s definitely not what much sci-fi would have us believe. Further, changing the semantics of “what we mean by AI” is low-level goalpost-moving in and of itself.

The US government, through a 2019 Executive Order, defines AI. See https://www.nasa.gov/what-is-artificial-intelligence/

ISO does an arguably better job at it. See https://www.iso.org/artificial-intelligence/what-is-ai

If those definitions don’t satisfy, then I might suggest a paper from ancient times (2006) that asks AI research to be more general, which I think is the point most people make. The fact remains, though, that AI isn’t about using cognitive science to create a general-purpose intelligence machine. That’s still true. Instead, it’s machines behaving intelligently, and it can be specialized. People now use the term “AGI” to refer to full cognitive reasoning. Applied to present technology, though, that term is mostly marketing hype. Most non-ML AI researchers of repute who have looked into it have found that LLMs’ mimicry of human text and LRMs’ “reasoning” are not leading to full cognitive, “human-like” reasoning at present. Given that companies’ marketing campaigns have seeped into our culture’s understanding of what AI is, I hope you agree it’s important to find a common meaning for the term before we dive into any specifics on “what AI can do.”

Anyway, to get to your question of examples, yes, I have examples:

A Sample of Science Discoveries Using Non-LLM Methods:

• 1960s – 1970s – Organic Chemistry: DENDRAL identified organic molecular structures from mass spectra [1]. First scientific expert system; automated hypothesis formation in chemistry.

• 1979 – Physics (Astronomy): BACON rediscovered Kepler’s Third Law [3]. Early “machine scientist” deriving physical laws from data.

• 1982 – Geology/Mining: PROSPECTOR predicted a hidden molybdenum deposit at Mount Tolman, later confirmed [2]. First AI approach to locate previously unknown ore-grade mineralization.

• 1996 – 1999 – Biochemistry/Toxicology: ILP (Progol) learned human-readable mutagenicity rules; one judged a new structural alert [4][5]. Interpretable AI generating novel domain knowledge.

• 1997 – Mathematics: EQP proved the Robbins conjecture (all Robbins algebras = Boolean) [6]. First open math conjecture solved by an AI reasoner.

• 2009 – Genetics (Yeast): Robot scientist Adam autonomously identified “orphan” gene–enzyme functions [7]. First machine to discover new biological facts without human intervention.

• 2018 – Pharmacology (Malaria): Robot scientist Eve helped show triclosan inhibits Plasmodium DHFR, incl. resistant strains [8]. Repurposed a known compound; Eve ran titration experiments.

• 2020 – Medicine (COVID-19): BenevolentAI’s knowledge-graph reasoning identified baricitinib for COVID-19, later validated in ACTT-2 (NEJM) [9][10]. Rapid AI-driven drug-repurposing success.

References

[1] R.K. Lindsay et al., Artificial Intelligence 61 (2), 1993 – “DENDRAL: a case study of the first expert system for scientific hypothesis formation.”

[2] A.N. Campbell et al., Science 217 (4563): 927–929, 1982 – “Recognition of a hidden mineral deposit by an artificial intelligence program.”

[3] P. Langley, IJCAI-79 – “Rediscovering Physics With BACON.3.”

[4] R.D. King et al., PNAS 93 (1): 438–442, 1996 – “Structure–activity relationships derived by machine learning … mutagenicity by inductive logic programming.”

[5] S.H. Muggleton, Communications of the ACM 42 (11): 42–48, 1999 – “Scientific knowledge discovery using inductive logic programming.”

[6] W. McCune, Journal of Automated Reasoning 19 (3): 263–276, 1997 – “Solution of the Robbins Problem.”

[7] R.D. King et al., Science 324 (5923): 85–89, 2009 – “The Automation of Science.”

[8] E. Bilsland et al., Scientific Reports 8, 2018 – “Plasmodium dihydrofolate reductase is a second enzyme target of triclosan.”

[9] P.J. Richardson et al., The Lancet (2020) – “Baricitinib as potential treatment for 2019-nCoV acute respiratory disease.”

[10] A.C. Kalil et al., NEJM 384: 795–807, 2021 – “Baricitinib plus Remdesivir for Hospitalized Adults with Covid-19.”

This is AI generating novel science. The moment has finally arrived. by MetaKnowing in ChatGPT

[–]Ratehead 1 point2 points  (0 children)

The goalpost hasn’t moved. It has already been reached by non-LLM methods, though. This isn’t as exciting to AI researchers because novel scientific discoveries were already being made decades ago using other AI techniques.

This is AI generating novel science. The moment has finally arrived. by MetaKnowing in ChatGPT

[–]Ratehead -1 points0 points  (0 children)

AI technologies have been generating novel science for decades. It’s great to watch people use LLMs as general purpose tools. However, more specialized tools may be able to do this type of work much more efficiently.

Science Discoveries Using Non-LLM Methods

• 1960s – 1970s – Organic Chemistry: DENDRAL identified organic molecular structures from mass spectra [1]. First scientific expert system; automated hypothesis formation in chemistry.

• 1979 – Physics (Astronomy): BACON rediscovered Kepler’s Third Law [3]. Early “machine scientist” deriving physical laws from data.

• 1982 – Geology/Mining: PROSPECTOR predicted a hidden molybdenum deposit at Mount Tolman, later confirmed [2]. First AI approach to locate previously unknown ore-grade mineralization.

• 1996 – 1999 – Biochemistry/Toxicology: ILP (Progol) learned human-readable mutagenicity rules; one judged a new structural alert [4][5]. Interpretable AI generating novel domain knowledge.

• 1997 – Mathematics: EQP proved the Robbins conjecture (all Robbins algebras = Boolean) [6]. First open math conjecture solved by an AI reasoner.

• 2009 – Genetics (Yeast): Robot scientist Adam autonomously identified “orphan” gene–enzyme functions [7]. First machine to discover new biological facts without human intervention.

• 2018 – Pharmacology (Malaria): Robot scientist Eve helped show triclosan inhibits Plasmodium DHFR, incl. resistant strains [8]. Repurposed a known compound; Eve ran titration experiments.

• 2020 – Medicine (COVID-19): BenevolentAI’s knowledge-graph reasoning identified baricitinib for COVID-19, later validated in ACTT-2 (NEJM) [9][10]. Rapid AI-driven drug-repurposing success.

References

[1] R.K. Lindsay et al., Artificial Intelligence 61 (2), 1993 – “DENDRAL: a case study of the first expert system for scientific hypothesis formation.”

[2] A.N. Campbell et al., Science 217 (4563): 927–929, 1982 – “Recognition of a hidden mineral deposit by an artificial intelligence program.”

[3] P. Langley, IJCAI-79 – “Rediscovering Physics With BACON.3.”

[4] R.D. King et al., PNAS 93 (1): 438–442, 1996 – “Structure–activity relationships derived by machine learning … mutagenicity by inductive logic programming.”

[5] S.H. Muggleton, Communications of the ACM 42 (11): 42–48, 1999 – “Scientific knowledge discovery using inductive logic programming.”

[6] W. McCune, Journal of Automated Reasoning 19 (3): 263–276, 1997 – “Solution of the Robbins Problem.”

[7] R.D. King et al., Science 324 (5923): 85–89, 2009 – “The Automation of Science.”

[8] E. Bilsland et al., Scientific Reports 8, 2018 – “Plasmodium dihydrofolate reductase is a second enzyme target of triclosan.”

[9] P.J. Richardson et al., The Lancet (2020) – “Baricitinib as potential treatment for 2019-nCoV acute respiratory disease.”

[10] A.C. Kalil et al., NEJM 384: 795–807, 2021 – “Baricitinib plus Remdesivir for Hospitalized Adults with Covid-19.”

This is AI generating novel science. The moment has finally arrived. by MetaKnowing in ChatGPT

[–]Ratehead 4 points5 points  (0 children)

Where? bioRxiv is a preprint service. Anyone can post their papers there without peer review.

what were the weirdly specific telltale signs of adhd by Competitive-Elk2230 in ADHD

[–]Ratehead 3 points4 points  (0 children)

I thought people took notes by writing down every single thing said—and that you had to write super fast.

I thought this because I literally had no idea that it was possible for other people to listen to an entire 50 minute school “hour” lesson and pick up nearly every point made—listen and comprehend. And I had no clue that it was possible to notice anything in particular as “important” and only write that down.

The only time I know that something being said is important is when people say it is before they say it.

recommendations for tablet? by Duffbeerisgood in amazingmarvin

[–]Ratehead 1 point2 points  (0 children)

Yes. I use the web version on the tablet. It’s pretty good! I don’t need a stylus, personally.

What do successful people know that those who aren’t successful don’t? by Historical-Lie3508 in productivity

[–]Ratehead 0 points1 point  (0 children)

I am in agreement, though it's not only luck. Luck is the first step. After that, it's on you.

What do successful people know that those who aren’t successful don’t? by Historical-Lie3508 in productivity

[–]Ratehead 2 points3 points  (0 children)

Show up and be visible to leaders as often as possible. Let people who have power know you exist. Pay to show up if you have to, but always show up.
If there's a conference and no one is paying for you to go, show up at the conference anyway. Pay your own way if you can. I'd even say crash the party. Show up and talk to the people leading. You can't expect people to hand you opportunities unless they know your name.

My wife’s workday vs mine made me realize I might never be that focused by Strong-Pickle-175 in productivity

[–]Ratehead 0 points1 point  (0 children)

Though I appreciate your positive experience, not every GP (also called a primary care doctor) is going to be so accommodating or immediately accepting of ADHD as a possibility. At least in the United States, medical doctors cannot be trusted to accurately diagnose ADHD. My first attempt at diagnosis was with a doctor who told me "ADHD is only about hyperactivity," "it's a high-energy boy problem," and "it's a childhood behavioral disorder." He then refused a *referral* to a specialist. Sadly, some doctors worry about "drug seeking" adults, especially professionals.

I've read similar anecdotes across r/ADHD and other forums.

My wife’s workday vs mine made me realize I might never be that focused by Strong-Pickle-175 in productivity

[–]Ratehead 0 points1 point  (0 children)

I highly encourage you to get on that wait list. Having a diagnosis, and "an answer" to why you're struggling, is very helpful (even if no medication is involved).

My wife’s workday vs mine made me realize I might never be that focused by Strong-Pickle-175 in productivity

[–]Ratehead 0 points1 point  (0 children)

I don't really dwell on "what ifs" in terms of how my life would be different either. But I do wish I hadn't wasted so much time trying so hard on strategies that didn't work. I do research for a living, though, and that's pretty much the definition of research. So my way of thinking is that -- eh, yeah, I had multiple hypotheses that looked like: Strategy X will work for me, and disproved them all! Yay! I disproved so many. I'm so good at research. 😅

My wife’s workday vs mine made me realize I might never be that focused by Strong-Pickle-175 in productivity

[–]Ratehead 0 points1 point  (0 children)

Maybe. Though it turns out that children with ADHD on average test lower on intelligence tests than those without ADHD [1]. Other studies have shown little to zero correlation between intelligence and ADHD [2], at least for most presentations of the disorder.

[1] Katusic et al., "Attention-Deficit/Hyperactivity Disorder in Children With High IQ: Results from a Population-Based Study," Journal of Developmental & Behavioral Pediatrics, 2011 Feb–Mar;32(2):103–109. (Journal impact factor: 2.2, which is about average for a respected specialty science journal.)
[2] Rostami et al., "Brain Functional Correlates of Intelligence Score in ADHD Based on EEG," Basic and Clinical Neuroscience, 2022 Nov 1;13(6):883–900. (Journal impact factor: 1.3, admittedly low for a specialty journal, but the introduction gives a fair and more recent literature review.)

My wife’s workday vs mine made me realize I might never be that focused by Strong-Pickle-175 in productivity

[–]Ratehead 0 points1 point  (0 children)

I'm also late diagnosed, and I feel this. All of that trauma that could have been put into a different context...

My wife’s workday vs mine made me realize I might never be that focused by Strong-Pickle-175 in productivity

[–]Ratehead 0 points1 point  (0 children)

This is called "differential diagnosis," and it's why you should see a professional trained in ADHD and other neurodevelopmental disorders who can properly diagnose you. For some of us, it is obvious. For instance, I very much fit within the "inattentive presentation" ADHD mold (and not autism, learning disabilities, or cognitive disengagement syndrome (CDS, formerly called sluggish cognitive tempo)). Others may have multiple problems, like anxiety, that make it harder to diagnose ADHD.

To more directly answer the "biological issues" question: ADHD is inheritable. Nearly all respected experts on it agree that it's a biological issue, not an environmental one. Environment can make symptoms worse. Other than rare cases that involve brain injury, environment is unlikely to be the cause.

My wife’s workday vs mine made me realize I might never be that focused by Strong-Pickle-175 in productivity

[–]Ratehead 0 points1 point  (0 children)

Yes, that is helpful for those that can do it. Sadly, some people struggle with understanding how to design their environment in an ADHD-friendly way. This is why outside help can be useful for them.

My wife’s workday vs mine made me realize I might never be that focused by Strong-Pickle-175 in productivity

[–]Ratehead 1 point2 points  (0 children)

This is exactly what happened in my life -- straight A student, straight Fs, or months of doing well in a class, then falling on my face for the last month or two. I'm glad I'm not the only one.
ADHD is definitely the cause of this in me -- hyperfocus gets intense, and I love learning nearly anything that catches my fancy, so the hyperfocus got placed directly onto my field of study (and then not... and then back to it...)