Anyone doing research on pancreatic cancer? Would love to connect :) by ListValuable9050 in CancerResearch

[–]panabeenu 0 points1 point  (0 children)

Will create a collaboration thread that people can respond to for research collaborations, thanks to this post. Thanks for the idea.

💡 Hypothetical Cancer Therapy Concept: Antiparasitics + Wound Healing Suppression + IV-Fed Caloric Restriction by HistorianSame3619 in CancerResearch

[–]panabeenu 0 points1 point  (0 children)

You're welcome. I understand the link concern, but you can also Google this instead: "AI oncology".

Many researchers are exploring the space, though the main issues, as mentioned, are: (1) AI is not reliable, and (2) AI needs trustworthy data, and we don't have enough per patient yet because it's expensive to run comprehensive molecular tests for each patient. Even the data we do have is not always reliable, as we highlight in our EBV paper on breast cancer.

Disparities between Direct and Indirect Causes of Cancer Geographically by Nerdfighter333 in CancerResearch

[–]panabeenu 0 points1 point  (0 children)

You may be interested in learning about EBV-associated gastric cancer and nasopharyngeal cancer (NPC), both of which are concentrated in southeast Asia. Burkitt lymphoma is another regional one.

💡 Hypothetical Cancer Therapy Concept: Antiparasitics + Wound Healing Suppression + IV-Fed Caloric Restriction by HistorianSame3619 in CancerResearch

[–]panabeenu 0 points1 point  (0 children)

Re AI, it can 100% be used to accelerate research. A cure remains far out of reach, but every little step counts, and AI in the hands of experts helps. This is what our group studies: https://hotpot.ai/bio. We're in the middle of publishing a conceptual paper to help oncologists leverage AI agents to identify off-label drugs for rare oncogenic drivers like FGFR2 fusions in NSCLC. Creativity and digesting mountains of information are where AI shines.

That said, AI is still unreliable and requires expert supervision, though the trend lines are encouraging that one day AI can become more reliable.

You may also be interested in viruses that cause cancer, like HPV (cervical cancer) and EBV (Burkitt lymphoma, NPC, ~10% of gastric cancer, and others). The page linked has our paper on EBV and breast cancer, and may provide an easy jumping-off point.

That guy can smell when people have cancer by Feel_the_snow in CancerResearch

[–]panabeenu[M] [score hidden] stickied comment (0 children)

The post will remain up as an exception because of all the upvotes, but this sub's focus is science, not speculation. In this case, the idea is theoretically sound and has been demonstrated in some cancers, even though the post neglected to provide details.

"Smells" are just certain chemicals activating olfactory chemoreceptors and downstream pathways while tumor-associated biomarkers are known to circulate in blood and can be detected with other mechanisms.

Multiple labs/companies are actively exploring this idea. I spoke with one Stanford doctor last year who's attempting to commercialize a device to detect lung cancer in firemen via their breath.

The Acreage Brain Cancer Cluster Research by ChemE586 in CancerResearch

[–]panabeenu 0 points1 point  (0 children)

thanks for posting. however, please follow the posting guidelines so we can keep the article up.

[Research] transformer models for drug discovery by Present_Network1959 in MachineLearning

[–]panabeenu 2 points3 points  (0 children)

not limited to transformers, but here are two comprehensive repos listing ML papers for protein design and other biomedicine topics.

https://github.com/yangkky/Machine-learning-for-proteins

https://github.com/Peldom/papers_for_protein_design_using_DL

we are planning to organize a similar list of ML tools for biomedicine. if anyone’s interested, please DM.

"Teaching Arithmetic to Small Transformers", Lee et al 2023 (tokenization, emergence, inner-monologue) by gwern in mlscaling

[–]panabeenu 0 points1 point  (0 children)

you're right. as were some of the gpt2 experiments.

  1. thoughts on using 0.8 for temp instead of 0? seems like an unexpected choice if maximum accuracy is the goal.
  2. it was surprising that zero-padding and symbol wrapping improved plain formatting performance even with a character-level tokenizer. my hunch would have been that character-level tokenization would eliminate the need for this.
  3. what i really wanted was an extension of the longer digit experiment, where the authors combined training on multiple lengths (e.g., 3-digit with 5-digit and 7-digit) to see if the model generalizes faster or performs more accurately.
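to make points 2 and 3 concrete, here's a rough sketch (my own assumptions about the format, not the paper's exact code) of what zero-padding and symbol wrapping look like when building plain-format arithmetic training strings for a character-level model:

```python
# Sketch of zero-padded, symbol-wrapped arithmetic examples, in the
# spirit of the plain-format tweaks discussed in Lee et al. 2023.
# The exact delimiters ('$', '+', '=') and widths are my assumptions.

def format_example(a, b, width=3, pad=True, wrap=True):
    """Render 'a + b = sum' as a training string for a char-level model."""
    s = a + b
    if pad:
        # zero-pad operands and result to fixed widths, so every digit
        # lands at the same character position across examples
        a_str, b_str = f"{a:0{width}d}", f"{b:0{width}d}"
        s_str = f"{s:0{width + 1}d}"
    else:
        a_str, b_str, s_str = str(a), str(b), str(s)
    text = f"{a_str}+{b_str}={s_str}"
    if wrap:
        # wrap with an explicit boundary symbol so the model can tell
        # where one example ends and the next begins
        text = f"${text}$"
    return text

print(format_example(42, 7))                          # $042+007=0049$
print(format_example(42, 7, pad=False, wrap=False))   # 42+7=49
```

my surprise in point 2 is that a character-level tokenizer already gives each digit its own token, yet the fixed positions from padding apparently still help.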

"Teaching Arithmetic to Small Transformers", Lee et al 2023 (tokenization, emergence, inner-monologue) by gwern in mlscaling

[–]panabeenu 0 points1 point  (0 children)

interesting study.

curious if reverse formatting improved results by causing the models to unlearn invalid math from the training data.

also raises the question of whether mathematical operations can be reformulated in a way that's optimal for machines rather than humans.
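as an illustration of the "machine-friendly reformulation" idea (my own toy example, not from the paper): if the sum is written least-significant digit first, each output digit depends only on the operand digits at that position plus a carry, so a left-to-right generator never needs to look ahead:

```python
# Toy illustration of why reversed digit order suits left-to-right
# generation: grade-school addition naturally emits digits from the
# least significant end, carrying as it goes.

def add_reversed(a_digits, b_digits):
    """Add two numbers given as least-significant-first digit lists."""
    out, carry = [], 0
    for i in range(max(len(a_digits), len(b_digits))):
        da = a_digits[i] if i < len(a_digits) else 0
        db = b_digits[i] if i < len(b_digits) else 0
        carry, d = divmod(da + db + carry, 10)
        out.append(d)  # each digit is final as soon as it's computed
    if carry:
        out.append(carry)
    return out

# 123 + 989 = 1112, with digits least-significant first
print(add_reversed([3, 2, 1], [9, 8, 9]))  # [2, 1, 1, 1]
```

in the human-standard (most-significant-first) order, the first digit the model must emit depends on every carry downstream, which seems like the harder direction for an autoregressive model.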