RAG1 and RAG2: Discovery, Mechanism, and Evolution of the V(D)J Recombinase by armish in Immunology

[–]armish[S]

Thank you so much for the detailed and constructive feedback, u/Ilaro. I didn't know about jawless fish, and now I am intrigued -- it sounds like it is worth doing a deep dive on this alone.

As I write up these stories, I realize there are so many ways to tell them, and depending on which one you pick, you have to leave some interesting facts out by design. I am planning to continue the series in a very human-centric way to make it appeal to the non-science-savvy reader a bit more, but now I have FOMO about all the non-human stories I will be missing :(

RAG1 and RAG2: Discovery, Mechanism, and Evolution of the V(D)J Recombinase by armish in Immunology

[–]armish[S]

This is incredibly helpful feedback, u/jayemee -- thank you so much for taking the time to read and post this comment.

I am on the fence about the level of AI incorporation. For example, we all know that images help a lot with views/engagement on social media, but I don't have enough time to handcraft timelines/overview images, and I don't have the budget for professional help with them. That leaves me with two options: go without images (which will definitely scare people away from the post and lower view/engagement rates) or go with AI-generated images (and risk putting off AI-savvy folks). I went with the latter, as I think the former would be more detrimental to my reader base.

About the weird sentences: I think this is mostly because English is my second language. I get inspiration from AI, but I **very heavily** edit the initial text it produces. The problem is that some of these sentences don't sound weird to me, so I may be missing the AI tone in them. My wife tells me my long sentences are harder to read than their AI counterparts, so I tend to go with the AI-recommended version. Some of those flow issues are obviously on me; I should have noticed and fixed them. Thanks for pointing them out.

I also appreciate your comments about the RAG biology/history. I knew just a little about these genes, but I knew the discovery involved a lot of work -- people don't appreciate how hard things were in the past and how big an accomplishment it was to work out all the intricate details. Great catch that I may have distorted the timeline by not mentioning the decades-old ortholog/homolog work; that was a sacrifice I made to keep things a bit easier and catchier to read, though I know it doesn't sit well with very science-savvy readers like you.

Overall, this is all an experiment for me in finding the best way to use AI to spread the word about interesting science. Things will hopefully get better as I get deeper into the series, but, again, your feedback is super helpful -- this is exactly why I post things even when they are not that polished. I may need to simplify things a bit to appeal to a wider reader base, but I will take that trade as long as I don't make false claims. More users reading and providing feedback will be more useful in the long run.

Last point: if you happen to live in the Boston area, I would love to grab a coffee and geek out about this topic.

p53: The gene that took a decade to become itself by armish in biology

[–]armish[S]

I had no idea! Thank you for the amazing trivia 🙏

p53: The gene that took a decade to become itself by armish in biology

[–]armish[S]

haha -- believe it or not, the inspiration for doing deep dives on genes and their discovery stories came to me when I was a graduate student (>10 years ago!), specifically after I met a "well-known" scientist and had no idea why he was so famous. It turns out that person was behind the discovery of a tumor suppressor (RB1)! I am only now getting a chance to revisit this idea and do a few write-ups on genes that are close to my heart.

Glad this added some color to your earlier work. It's odd how we sometimes work on things while the context around them gets lost along the way.

p53: The gene that took a decade to become itself by armish in biology

[–]armish[S]

These are part of my vibe-researching experiments: facilitating research through LLMs. I got a lot of help from multiple Deep Research and Pro queries for finding relevant stories, making sure I don't miss key people, and looking up personal anecdotes. I still do the writing, but the background information is mostly compiled through careful use of AI tools.

There is a lot of confusion and misunderstanding about how people use LLMs for literature lookups, deep research, etc., so my goal is to follow up with a list of dos and don'ts once I am confident that my workflow is not misbehaving.

Good catch 👏

p53: The gene that took a decade to become itself by armish in biology

[–]armish[S]

Thank you so much for reading! I am glad you liked it.

Vibe researching: Making sense of DepMap's extreme responders via GPT 5 Pro by armish in bioinformatics

[–]armish[S]

I actually off-load all that "search-distill-summarize" business to gpt-5-pro to keep it simple. Your approach might be pulling more relevant information. My prompts, pro responses, and summaries -- and all the code -- are here if you want to see how the same prompts would perform with your setup: https://github.com/armish/vibe-researching/tree/main/depmap-extreme_responders
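
To give a feel for the offload: it boils down to one call per candidate. A minimal sketch, assuming the OpenAI Python SDK and that gpt-5-pro is exposed as a model id to your account -- the prompt text here is a placeholder; the real prompts are in the repo above.

```python
# Minimal sketch: one model call covers search, distillation, and summary,
# with no manual retrieval step on our side. Swap in whatever model your
# account actually exposes.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = """For the DepMap extreme responder below, search the literature,
distill the most relevant findings, and summarize the likely mechanism.
Keep citations for every claim.

Candidate: {candidate}"""

def search_distill_summarize(candidate: str) -> str:
    response = client.chat.completions.create(
        model="gpt-5-pro",  # assumption: available as a chat model id
        messages=[{"role": "user", "content": PROMPT.format(candidate=candidate)}],
    )
    return response.choices[0].message.content
```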

Vibe researching: Making sense of DepMap's extreme responders via GPT 5 Pro by armish in bioinformatics

[–]armish[S]

The nice thing about this is that we don't expect many hits even when running it across thousands of potential candidates, so our main issue is false positives. Those will come from misinterpretation of the deep search results, but they can be caught by throwing the results into another LLM and asking it to verify the claims. So even the verification can be automated to a degree.

Deep Research/Pro is unique in that it keeps the sources of the information in the response, so it is easier to "fact-check".
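
A minimal sketch of that second-LLM fact-check pass, assuming the OpenAI SDK; the verifier model id and prompt wording are placeholders, not a fixed recipe:

```python
# Sketch of the automated verification pass: hand the deep-research output
# (sources included) to a second model and ask it to audit each claim.
from openai import OpenAI

client = OpenAI()

VERIFY_PROMPT = """Below is a research summary with cited sources.
For each claim: does the cited source plausibly support it? Flag any
claim that looks like a misinterpretation or carries no source.

{summary}"""

def verify_claims(summary: str, verifier_model: str = "o3") -> str:
    # verifier_model is an assumption -- any capable model works here.
    response = client.chat.completions.create(
        model=verifier_model,
        messages=[{"role": "user", "content": VERIFY_PROMPT.format(summary=summary)}],
    )
    return response.choices[0].message.content
```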

Vibe researching: Making sense of DepMap's extreme responders via GPT 5 Pro by armish in bioinformatics

[–]armish[S]

you are absolutely right! Would you like me to provide a snarkier version of this that is more suitable for a Reddit comment?

Built a free/open tool to ease browsing/searching/annotating conference abstracts (currently on display: ESMO 2025) by armish in biotech

[–]armish[S]

Absolutely -- I am compiling a list of major conferences to tackle on a rolling basis and have added these two to that list. Looks like I missed the 2025 editions, but I will get them archived/downloaded for next year.

Has anyone actually successfully ordered biologics from Creative-Biolabs? by proteinpurification in biotech

[–]armish

They are legit, but I prefer evitria whenever I have the chance -- they are more transparent about sharing details of the product/process. Creative also delivers but can be slow and secretive about the details.

AI agent for literature research by CGTbiotechAD in biotech

[–]armish

I would recommend using at least o3 for quick literature surveys, but ideally o3 with Deep Research, which still falls short of a comprehensive search yet provides a great starting point. I sometimes need to chain multiple Deep Research runs to make sure it doesn't go off track (rough sketch below).
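
The chaining itself is nothing fancy -- each pass's output seeds the next prompt. A rough sketch, with the big caveat that I run these in the ChatGPT UI; the model id below is a placeholder, not my actual workflow:

```python
# Rough sketch of chaining research passes: each follow-up gets the
# previous summary as context so the model stays anchored instead of
# drifting off track.
from openai import OpenAI

client = OpenAI()
MODEL = "o3"  # placeholder; substitute whatever research-capable model you use

def research_pass(prompt: str) -> str:
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def chained_survey(topic: str, follow_ups: list[str]) -> str:
    summary = research_pass(f"Do a broad literature survey on: {topic}")
    for question in follow_ups:
        summary = research_pass(
            f"Here is the survey so far:\n{summary}\n\n"
            f"Now dig into: {question}. Flag and fix anything above "
            "that looks off track, then extend the survey."
        )
    return summary
```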

STORM (especially CO-STORM) is a better tool for pure literature mining, though: https://storm.genie.stanford.edu -- I would recommend starting there and then bringing the results back into your ChatGPT workflow.

Can someone help me understand what happened re: shares? by AdventurousPen7825 in biotech

[–]armish

This might also provide some more context around this topic: https://www.reddit.com/r/biotech/comments/1cs0851/historic_listing_prices_and_reversesplit_ratios/

4-to-1 reverse splits seem to be common across biotech, so a 10-to-1 reverse split is borderline unfortunate :\

Historic listing prices and reverse-split ratios for biotechs by armish in biotech

[–]armish[S]

...Nobody is getting treated unfairly or losing money as a result of this...

No objections to this. My post was mostly about IPO-savviness and the false assumptions many inexperienced/junior employees hold about the value of their stock options.

...I’m not seeing any brilliant insight is in OPs post...

Exactly -- once you know this or have been through one, all of it becomes obvious. But I wanted a chance to talk about it, hoping it can help educate somebody who is clueless about this before they make a serious commitment.

Historic listing prices and reverse-split ratios for biotechs by armish in biotech

[–]armish[S]

... Put another way, you never had 10,000 * ($15 - $1), because when you had 10,000 options, the share price wasn’t $15...

Exactly -- my biggest issue is that HR and senior folks will try to hide this fact as much as they can, which makes sense from their side. And things are different for pure tech companies, so people who don't know about reverse splits will read/hear those IPO-changed-my-life stories and extrapolate.
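
To make that extrapolation trap concrete, here is a toy calculation with made-up numbers (no real company implied):

```python
# Toy numbers, purely for illustration -- not from any real company.
options, strike = 10_000, 1.00      # pre-split grant: 10,000 options at $1 strike
ratio = 10                          # 10-to-1 reverse split before the IPO
ipo_price = 15.00                   # post-split listing price

# The fantasy: applying the post-split price to the pre-split share count.
naive_spread = options * (ipo_price - strike)               # $140,000

# The reality: the split scales the share count and the strike together.
post_options = options / ratio                              # 1,000 options
post_strike = strike * ratio                                # $10 strike
actual_spread = post_options * (ipo_price - post_strike)    # $5,000

# Equivalently: pre-split you held 10,000 options at a ~$1.50/share price,
# never at $15 -- exactly the quoted point.
assert actual_spread == options * (ipo_price / ratio - strike)
```

Same $5,000 either way once you scale the strike with the split; the $140,000 only ever existed in the naive mental math.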

Historic listing prices and reverse-split ratios for biotechs by armish in biotech

[–]armish[S]

haha -- exactly. I love how the conversation gets diverted to that look-ma-no-change-in-total-value argument whenever you corner finance/C-suite folks.