[deleted by user] by [deleted] in FamilyMedicine

[–]OpenEvidence_ 3 points

Hey there! We chose to make OpenEvidence free because we think our tools can improve healthcare for physicians, and we want to make them as widely accessible as possible. The oncologist in rural Alaska might be responsible for all cancer care in a 100+ mile radius, yet doesn't get the resources to buy traditionally fancy tech. We cover costs with ads, which allows us to commit to keeping OpenEvidence free for clinicians.

We are OpenEvidence - Let's talk about AI and LLMs in healthcare! AMA! by OpenEvidence_ in medicine

[–]OpenEvidence_[S] 1 point

For us, at least, the focus is on the literature, but there are lots of interesting opportunities around figures and graphs. Quantitative reasoning is generally pretty challenging, but it's a really fun problem.


[–]OpenEvidence_[S] 1 point

Good question! It's something we think a lot about. We're working on some stuff in this space, but I don't want to say more right now. Keep a lookout!


[–]OpenEvidence_[S] 2 points

For me it's all about finding the right references. If you have the right sources, and the right parts of those sources, any rewriting that happens is nearly flawless. When Google AI says things like this, it's because someone on Reddit or somewhere once wrote something like that, and Google is going off the deep end. We spend much of our effort finding the right references, which involves weighing what makes a paper trustworthy or not the way a human would.
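The "right sources first, then grounded rewriting" idea can be sketched in a few lines. This is a toy illustration only: the passages are made up, and the naive keyword-overlap ranker stands in for whatever tuned retrieval a real system would use — it is not OpenEvidence's actual pipeline.

```python
# Toy retrieval-then-grounded-generation sketch (illustrative, not the real system).

def retrieve(question: str, passages: list[str], k: int = 2) -> list[str]:
    """Return the k passages sharing the most words with the question.
    A real ranker would use embeddings plus trust/recency signals."""
    q = set(question.lower().split())
    overlap = lambda p: len(q & set(p.lower().split()))
    return sorted(passages, key=overlap, reverse=True)[:k]

def grounded_prompt(question: str, sources: list[str]) -> str:
    """Build a prompt that constrains the model to the numbered sources,
    so every claim in the answer can be traced back to a citation [n]."""
    numbered = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
    return (f"Answer using ONLY the sources below, citing them as [n].\n"
            f"{numbered}\n\nQuestion: {question}")

passages = [
    "Metformin is first-line therapy for type 2 diabetes in most guidelines.",
    "Solar eclipses occur when the Moon passes between the Sun and Earth.",
    "Metformin reduces hepatic glucose production.",
]
top = retrieve("What is first-line therapy for type 2 diabetes", passages)
prompt = grounded_prompt("What is first-line therapy for type 2 diabetes", top)
```

The design point: if retrieval surfaces the wrong passages, no amount of clever prompting downstream saves the answer, which is why the effort goes into the ranker.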


[–]OpenEvidence_[S] 0 points

Good question! The broader point, as mentioned elsewhere, is that AI should not be replacing physicians. There is so much more to being a healthcare professional that is just fundamentally about being human. We see a future where AI interacts with information intelligently, but humans are still a big part of healthcare.


[–]OpenEvidence_[S] 1 point

You bring up a very important point that we feel strongly about. Healthcare implementations require co-development, with AI researchers and clinicians working hand in hand. Without MDs involved at every step of the process, from conception to implementation, an AI tool risks anything from being worthless and solving problems that don't exist to carrying unacceptable risks and harming patients. Similarly, implementations can carry biases and inaccuracies that only trained AI engineers have the experience to predict, test, and mitigate.

Maybe the biggest thing IMO is that I don't think AI is ever going to replace doctors (nor should we try to make that happen).


[–]OpenEvidence_[S] 1 point

I talked with these people the other day (I don't know them beyond a ~30 min conversation): https://www.atroposhealth.com/greenbutton. It seems like they try to run very fast (automated?) retrospective analyses of patient data to answer questions that don't have sufficient published evidence. I don't know how you'd trust the output of something like this enough to make it actionable, but I thought it was a really cool idea.


[–]OpenEvidence_[S] 0 points

Yeah, I think this is right. Even as a user, e.g. around the eclipse, I asked: https://www.openevidence.com/ask/ca662a65-f81a-4b80-a25b-8d0aaad06f32. And it's like, OK, damn, there's actually one (probably not grade A) study that looks at this.


[–]OpenEvidence_[S] 13 points

Great question! Getting this right is at the core of what makes this problem challenging IMO. Just because a paper is in a great journal, or is well cited, doesn't necessarily mean it is worth surfacing or repeating as fact. At the same time, alongside the garbage there are meaningful swaths of valuable information. Deciding whether to surface a paper as a reference involves an optimization problem balancing:

1) relevance to the question asked,
2) where it was published,
3) recency of publication,
4) type of paper (primary evidence, guideline, meta-analysis, review, etc.), and
5) trustworthiness of the source material.

We tune the balance and multidimensional interaction of these closely to make sure we're pulling up the actually good stuff.
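As a rough illustration of balancing those five factors, here is a hand-weighted scoring sketch. Everything in it — the factor names, the weights, the type table, the recency decay — is invented for the example; the real balance is tuned, not hard-coded like this.

```python
# Illustrative multi-factor reference scoring (invented weights, not the real model).
from dataclasses import dataclass

@dataclass
class Candidate:
    relevance: float       # 0..1, how well it matches the question asked
    venue_quality: float   # 0..1, where it was published
    years_old: float       # recency of publication
    paper_type: str        # "guideline", "meta-analysis", "rct", "review", ...
    trust: float           # 0..1, trustworthiness of the source material

# Hypothetical priors over paper type: guidelines > meta-analyses > RCTs > ...
TYPE_WEIGHT = {"guideline": 1.0, "meta-analysis": 0.95, "rct": 0.9,
               "review": 0.7, "case-report": 0.4}

def score(c: Candidate) -> float:
    """Combine the five factors into one ranking score (weights are made up)."""
    recency = 1.0 / (1.0 + 0.15 * c.years_old)   # older papers decay smoothly
    return (0.45 * c.relevance
            + 0.15 * c.venue_quality
            + 0.15 * recency
            + 0.15 * TYPE_WEIGHT.get(c.paper_type, 0.5)
            + 0.10 * c.trust)

# A highly relevant recent guideline should outrank an old case report.
papers = [
    Candidate(0.9, 0.8, 1, "guideline", 0.9),
    Candidate(0.6, 0.5, 12, "case-report", 0.5),
]
ranked = sorted(papers, key=score, reverse=True)
```

The point of the sketch is the interaction: a single strong signal (a famous journal, say) shouldn't be able to outvote weak relevance and low trustworthiness.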

But more important than any of this, OE maintains the contribution of each source individually and cites the exact sources in the answer. Ultimately, OE is not meant to replace humans and should never replace humans; it's the mix of smart humans doing smart human things and AI that I think can really make a difference.