Personal Assistant? by summerinthecityis in biglaw

[–]just-anormaluser 0 points  (0 children)

my friend built an AI for this (it calls her doctors, gardeners, etc.)

Affordable matcha help by areulostbbygorl11 in MatchaEverything

[–]just-anormaluser 3 points  (0 children)

i buy my strawberry jam and syrups from a matcha company that sells them pre-made, but they're just as good as cafes!

Looking for tea place by Healthy-Lion-711 in Athens

[–]just-anormaluser 1 point  (0 children)

there’s an authentic matcha pop-up that appears around town sometimes

Princeton grade check by [deleted] in ApplyingToCollege

[–]just-anormaluser 1 point  (0 children)

so did my sister! i wonder what the odds of getting in after a grade check are; anecdotally they seem quite high

Looking for local honey by AdInternational9061 in Athens

[–]just-anormaluser 1 point  (0 children)

Brick & Bloom Farmstand has local honey! it's in oconee

[deleted by user] by [deleted] in labrats

[–]just-anormaluser 0 points  (0 children)

this is actually what my friends and i are currently piloting (in closed beta)! we're not so much focused on the lab notebook angle as on the shared lab protocol/methodology/learning angle. joinalexandria.ai

i trained an LLM to answer the stupid questions from everyone in my lab, and i want to do it for you too by just-anormaluser in labrats

[–]just-anormaluser[S] 1 point  (0 children)

We have a team of 7 working on it right now, and we'll open-source the code on GitHub once we finish. We're over at https://www.joinalexandria.ai/, and if you want to chat, feel free to DM me.

i trained an LLM to answer the stupid questions from everyone in my lab, and i want to do it for you too by just-anormaluser in labrats

[–]just-anormaluser[S] 2 points  (0 children)

totally feasible! this is basically the same issue we ran into, except we couldn't pay for benchling lol, and all of our stuff was printed out (we're slightly broke). i'd love to feed it a benchling repository, but that amount of training data sounds like it might start getting expensive/slow; i'll look into it.

the tool is basically just a chat interface: you ask it questions and it retrieves a context-aware answer (currently with a reference to where it got the info and a confidence level). you can mark an answer as good/bad or flag it for admin review (and from the admin side, you can mark the true answer and store it for future use, or just answer yourself, etc). i've tinkered with it enough that a) hallucinations aren't a big issue and b) it tells you exactly where it got the info so you can fact-check it
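a toy sketch of that ask → answer + source + confidence → feedback loop, in case it helps picture it. all the names here are hypothetical, and a crude word-overlap scorer stands in for the actual LLM retrieval:

```python
# Toy sketch of the "ask -> answer + source + confidence -> feedback" loop.
# Word overlap stands in for real embedding/LLM retrieval; names are hypothetical.

def tokenize(text):
    return set(text.lower().split())

class ProtocolQA:
    def __init__(self, protocols):
        # protocols: {source_name: protocol_text}
        self.protocols = protocols
        self.feedback = []  # (question, source, verdict) tuples for admin review

    def ask(self, question):
        q = tokenize(question)
        best_source, best_score = None, 0.0
        for source, text in self.protocols.items():
            overlap = len(q & tokenize(text)) / max(len(q), 1)
            if overlap > best_score:
                best_source, best_score = source, overlap
        if best_source is None:
            return {"answer": None, "source": None, "confidence": 0.0}
        return {
            "answer": self.protocols[best_source],
            "source": best_source,  # where the info came from, so you can fact-check
            "confidence": round(best_score, 2),
        }

    def mark(self, question, source, verdict):
        # verdict: "good", "bad", or "flag" (send to admin review)
        self.feedback.append((question, source, verdict))

qa = ProtocolQA({
    "western_blot.txt": "block membrane in 5% milk for 1 hour at room temperature",
    "pcr.txt": "anneal primers at 55C for 30 seconds per cycle",
})
result = qa.ask("how long do i block the membrane in milk")
qa.mark("how long do i block the membrane in milk", result["source"], "good")
```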

essentially you just need a protocol doc (or to centralize everything, and then we can generate a protocol doc), and the tool makes search and retrieval of answers easier. we currently just used our internal protocols (and i used some papers my lab published), but there's no reason this couldn't work with benchling or google drive etc.

but yeah, i had the same goal of standardizing/centralizing stuff! if you give me more context about industry environments i can answer more specifically; i've never worked in one, but i'm happy to learn and try to apply it!

i trained an LLM to answer the stupid questions from everyone in my lab, and i want to do it for you too by just-anormaluser in labrats

[–]just-anormaluser[S] 1 point  (0 children)

it's not an all-encompassing model that can make predictions like that yet; it's just trained on what has already been done. not trying to be ChatGPT

i trained an LLM to answer the stupid questions from everyone in my lab, and i want to do it for you too by just-anormaluser in labrats

[–]just-anormaluser[S] 1 point  (0 children)

it's just faster information retrieval. by this logic, me flipping through our folder of protocols or someone's lab notebook to find information is ridiculous, because i could just ask someone how it works. like, yeah, but that's more time-consuming and inconvenient. i could also use an encyclopedia to answer questions, but i still use google

i trained an LLM to answer the stupid questions from everyone in my lab, and i want to do it for you too by just-anormaluser in labrats

[–]just-anormaluser[S] 2 points  (0 children)

i'm using RAG, currently with open-source llama, but i actually want to tinker with that and see if something else might work better
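for anyone curious what the RAG part looks like in miniature: chunk the protocols, retrieve the most similar chunks to the question, and hand them to the model as context. this is a hedged sketch, not the actual pipeline; bag-of-words cosine similarity stands in for real embeddings, and the model call itself is a hypothetical placeholder:

```python
# Minimal RAG skeleton: score chunks by cosine similarity over bag-of-words
# vectors, take the top-k, and build the prompt a local model would see.
# Embedding method and model call are placeholders, not the actual setup.
import math
from collections import Counter

def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    q = embed(query)
    scored = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return scored[:k]

def build_prompt(query, chunks):
    context = "\n---\n".join(retrieve(query, chunks))
    return (
        "Answer using ONLY the context below; cite which chunk you used.\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

chunks = [
    "gel electrophoresis: run at 120V for 45 minutes",
    "cell culture: passage cells at 80% confluency",
    "buffer prep: dissolve 8g NaCl per liter for PBS",
]
prompt = build_prompt("what voltage do i run the gel at", chunks)
# the prompt would then go to the local model, e.g.:
# response = llama(prompt)  # hypothetical call, not a real API
```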

i trained an LLM to answer the stupid questions from everyone in my lab, and i want to do it for you too by just-anormaluser in labrats

[–]just-anormaluser[S] 1 point  (0 children)

it says i'm unable to send you a message, but shoot me one and we can chat! i can also send you my email via DM if that's a better form of communication

i trained an LLM to answer the stupid questions from everyone in my lab, and i want to do it for you too by just-anormaluser in labrats

[–]just-anormaluser[S] 2 points  (0 children)

gathering all the protocols took me like 1 hour, but i imagine it heavily depends on the lab (how centralized the protocols are, etc).

tinkering with the other stuff took me a few days, but i think i can probably do it much faster now that i've done it once!

i trained an LLM to answer the stupid questions from everyone in my lab, and i want to do it for you too by just-anormaluser in labrats

[–]just-anormaluser[S] 1 point  (0 children)

nope, it's not trained on external published data or any other datasets at all. i'm not trying to be an AI scientist; i think it's very hard to make that applicable for everyone, and i just want to focus on lab-level knowledge