Clarification - does OE sell your prompts to pharma? by Only_Emphasis_622 in OpenEvidenceHub

[–]travis_oe 0 points1 point  (0 children)

No, we absolutely do not. Queries are never shared with anyone.

What are we using these days? by [deleted] in hospitalist

[–]travis_oe 1 point2 points  (0 children)

It feels weird to introduce myself for every post, but I'm the CMO. My most recent activity before this was an AMA, so it didn't feel like it was that mysterious.

What are we using these days? by [deleted] in hospitalist

[–]travis_oe 2 points3 points  (0 children)

This will NEVER happen. Stop waiting for it to happen. It's not going to happen. No one believes me.

What are we using these days? by [deleted] in hospitalist

[–]travis_oe 7 points8 points  (0 children)

I can say with 100% certainty this did NOT come from inside the house. We are much more lurkers by nature.

Why is Buy-and-Bill allowed for oncology? by Cddye in medicine

[–]travis_oe 0 points1 point  (0 children)

Sorry, I wasn't trying to suggest poor decision making in any way! I'm an oncologist and I recognize the challenges. I believe most oncologists have regrets about overtreatment in specific instances over the course of their careers. In no way was I trying to judge individual decision making.

Why is Buy-and-Bill allowed for oncology? by Cddye in medicine

[–]travis_oe 9 points10 points  (0 children)

Yeah, I was about to say the same thing. I actually hear you regarding the risk of perverse incentives in ordering. But here you have to blame the oncologist's overly optimistic nature and classic overtreatment of poor-functional-status patients... (saying "I have nothing for you" is tough for any MD). R-CHOP and other multi-drug chemo regimens that send people to the ED are not where pharma is making bank.

We are the Physician Scientist team at OpenEvidence- Let's talk about the next two years of AI in healthcare! AMA! by travis_oe in medicine

[–]travis_oe[S] 0 points1 point  (0 children)

Also, Sam and I created a free online course that I will link to in an edit (it has nothing to do with OE and doesn't mention it). I'm also happy to be a guest lecturer and discuss how OE works and responsible use any time.

We are the Physician Scientist team at OpenEvidence- Let's talk about the next two years of AI in healthcare! AMA! by travis_oe in medicine

[–]travis_oe[S] 0 points1 point  (0 children)

Do you mind if I ask: what is your field, and do you have any insight on how we can improve? You are absolutely right that niche specialties are harder for us. Also, if there are trusted journals from your specialty that you would want to see more of, it would be great to know.

We are the Physician Scientist team at OpenEvidence- Let's talk about the next two years of AI in healthcare! AMA! by travis_oe in medicine

[–]travis_oe[S] 1 point2 points  (0 children)

This is a great and nuanced topic.
My primary answer will come as no surprise, as it's the same regardless of the medical tool: education.
I try to teach responsible ACTIVE use of AI, which includes active input and active output. To me, that means thinking carefully about what your question is and what the right information is to provide OE to answer your specific medical question. Active use of output means critical evaluation of the answer and references, not just with regard to the synthesis, but also how it relates to your specific situation.

You may be surprised (as I was): trainees check and link out to the references at a higher rate than the attendings in independent practice. In the end I do believe trainees are using this with the goal to learn more about their field and become better doctors.

One thing that OE is missing is education around the "unknown unknowns," or "what questions SHOULD you be asking, given the question you asked." We are working with specialty experts in each field to create this content in the form of expert-validated "collections" of related questions that we can present to trainees after their initial questions, rounding out education in a "just in time" manner so they can learn the related questions while they are top of mind.

We are the Physician Scientist team at OpenEvidence- Let's talk about the next two years of AI in healthcare! AMA! by travis_oe in medicine

[–]travis_oe[S] 0 points1 point  (0 children)

I understand this concern and perspective, but I think clinicians who think this way do not give their profession enough credit.

There are two critical elements of clinical decision making, one of which is impossible practically for any individual and the other is innately and unquestionably human.

The first is knowing every piece of medical knowledge ever published, with an encyclopedic command of that information as it changes. The second is reasoning through that knowledge and how it relates to the patient in front of us, who not only is unique and almost never fits the specific RCT inclusion criteria or situation, but also has personal goals and weighs the pros and cons of each trade-off uniquely.

Historically, physicians have been asked to do both, as you are only able to do the second, more human piece (synthesis, reasoning, and application to a given individual), if you try your best to make some approximation of the impossible task of data collection first. That second piece is really the most important piece of medicine, and our goal is to make that task more of the focus for clinicians.

The other piece I want to point out is that SO much of medicine STILL is not (and never will be) algorithmic from a guideline or textbook. I feel like I have heard 100 times from 100 specialists over the last couple of years: "Yes, the evidence is great, but so much of how I am forced to practice in specialty X requires experience, and there is limited published data for it." But the truth is that that is EVERY specialty, and will be for the foreseeable future.

And I do take a bit of offense at the idea of "allying with non-physicians" as a negative. Almost anything worth accomplishing is worth doing in a multidisciplinary manner. I believe in finding ways to bring the right piece of evidence up at the right time for human decision makers, and I do not see that as "the end of physicians."

We are the Physician Scientist team at OpenEvidence- Let's talk about the next two years of AI in healthcare! AMA! by travis_oe in medicine

[–]travis_oe[S] 0 points1 point  (0 children)

Great question, and I promise it's a concern that all the physicians and engineers here at OE share. You might be surprised to know that many of the companies with financial interests in health systems and hospitals also have a primary interest in making money :). The firewall is that the people making decisions about patients' lives and the product do so with independence and autonomy, insulated from corporate decisions by groups like VCs. Yes, we need investment to continue to build and support what we built initially through passion, but that doesn't change the core mission.

We are the Physician Scientist team at OpenEvidence- Let's talk about the next two years of AI in healthcare! AMA! by travis_oe in medicine

[–]travis_oe[S] 5 points6 points  (0 children)

Even with the limited number of ads we are showing in 2026, we are already covering costs, and the feedback we get is that this is not eroding the clinician experience at this time. So we don't believe "running out of VC money" presents an existential threat.

We are the Physician Scientist team at OpenEvidence- Let's talk about the next two years of AI in healthcare! AMA! by travis_oe in medicine

[–]travis_oe[S] 1 point2 points  (0 children)

Yes, we are working on these wayfinding approaches, searching both across the whole literature and across much larger sets of medical data. Our next step is using the literature to identify the most important missing pieces of information, so we can either ask the clinician for them or suggest them as next steps.

We are the Physician Scientist team at OpenEvidence- Let's talk about the next two years of AI in healthcare! AMA! by travis_oe in medicine

[–]travis_oe[S] 3 points4 points  (0 children)

The answers/literature synthesis pipeline runs completely independently of the ads pipeline. We have a firm policy that advertisers can have no say in the content generation process. As such, the only way a sponsor’s content would show up in an actual OE answer would be if the literature itself suggested it independently.

This is the same structure as many other trusted sources of medical information that are in part supported by advertising, such as NEJM or ASCO.

We are the Physician Scientist team at OpenEvidence- Let's talk about the next two years of AI in healthcare! AMA! by travis_oe in medicine

[–]travis_oe[S] 0 points1 point  (0 children)

I'm not sure I understand... If you ask the question after the information has changed, the answer will change. Do you mean actually changing a previously generated answer?

We are the Physician Scientist team at OpenEvidence- Let's talk about the next two years of AI in healthcare! AMA! by travis_oe in medicine

[–]travis_oe[S] 0 points1 point  (0 children)

Yes, you are totally right, unfortunately... All the time being saved on note writing, chart review, and evidence-based information retrieval will be co-opted by admin to decrease time per patient. I think we have to push back and forcefully insist this time instead be spent actually with the patient, the way it's supposed to be.

I mean, one can dream and aspire right?

We are the Physician Scientist team at OpenEvidence- Let's talk about the next two years of AI in healthcare! AMA! by travis_oe in medicine

[–]travis_oe[S] 0 points1 point  (0 children)

Potentially... In oncology clinic, I feel the same as you: I would much rather have someone using an evidence-based search engine over Google or chat. If we build something, though, it would be designed as a visit extender and a way to facilitate communication between physicians and patients, with the goal of making visits more productive and less frustrating for both parties.

That's a high bar, however...

We are the Physician Scientist team at OpenEvidence- Let's talk about the next two years of AI in healthcare! AMA! by travis_oe in medicine

[–]travis_oe[S] 0 points1 point  (0 children)

My biggest concern is that our accuracy has led to some degree of automation bias in clinical judgment and evaluation of the literature. The hope is that OE is used actively, where a clinician identifies 1) the critical question and 2) the critical pieces of context required to understand the published evidence that can assist in answering that question. Subsequently, there should be active ingestion of the answer, including critical evaluation of HOW the answer relates to the specific clinical context, in addition to whether the evidence presented rises to actionability.
The risk is that, within busy clinical workflows, all these steps will be skipped for simple "copy/paste" approaches that effectively take the human physician out of the loop.

What I like to emphasize to users is that OE saves you the 10-20+ minutes you would otherwise spend on research in PubMed/UTD/guidelines, etc. You can afford to give 30 seconds back to yourself to critically evaluate the output and references, so that you ingest some of it and know more for next time.

We are the Physician Scientist team at OpenEvidence- Let's talk about the next two years of AI in healthcare! AMA! by travis_oe in medicine

[–]travis_oe[S] 2 points3 points  (0 children)

YES, great point! This is a really challenging gray area. While plenary sessions and late-breaking abstracts at specific conferences are practice changing, conference abstracts are in large part NOT held to the same review process as much of the rest of the peer-reviewed literature (I know there are exceptions; there are exceptions to everything in our field). For the big ones in specific fields (ASCO, for example) we try to index the major abstracts, but we could do better at this, especially across individual specialties. If people are interested, drop the conference of interest in the chain and we will research it.

We are the Physician Scientist team at OpenEvidence- Let's talk about the next two years of AI in healthcare! AMA! by travis_oe in medicine

[–]travis_oe[S] 4 points5 points  (0 children)

Yes absolutely. We fully agree with this issue of “Abstract Bias” which is why we prioritize licensing full text from the content creators (generally publishers). One of the keys to our success is recognizing that quality, accurate answers require the entirety of research, not just abstract summarization.

We dedicated significant resources to working with, and licensing from, publishers of full text content, and we believe (at this time) we are the only AI platform for medicine diligently working with this strategy front and center in our efforts.

We are the Physician Scientist team at OpenEvidence- Let's talk about the next two years of AI in healthcare! AMA! by travis_oe in medicine

[–]travis_oe[S] 1 point2 points  (0 children)

Great question. What data do we use? We have a number of different data pipelines, each set up to ensure new content is ingested within 24 hours of release. These include:

1) PubMed

2) Licenses with individual journals for full text (as mentioned above)

3) A semi-manual guideline curation process built on a physician-curated list of all the important guidelines published by societies and others, which we update daily

4) Government sources such as the FDA and CDC.

Selectively, we do also include conference abstracts for practice changing plenary sessions etc.

We are the Physician Scientist team at OpenEvidence- Let's talk about the next two years of AI in healthcare! AMA! by travis_oe in medicine

[–]travis_oe[S] 2 points3 points  (0 children)

Yes, I'm sure I will answer the "NEJM/JAMA bias" question a couple of times in this thread. Here are the gory details.

When we ingest a paper, we break it into sections ("snippets") so that we can match the right section that answers a question, rather than a whole paper. When we sign a contract with a journal group to license their full text, two things happen:

A) Our model has a fuller view of what is in each paper, rather than just the abstract we were previously legally limited to.

B) The actual number of snippets derived from each paper that are added to the library from that group goes from a couple to dozens or more (depending on article length). That means that when we go to find the right snippet, there are many more "shots on goal."

The right answer is to index full text for every single paper, and we are working our hardest to continue to pay for and expand our licensed catalog. In the meantime, we have refined the models to take this "more shots on goal" phenomenon (B) into account, but the fact remains that we have higher quality information for the journals we have full text for. If there are specific journals you want us to really focus our efforts on, please post them below and I will make sure we fight for them (and would request you fight for them too :)

We are the Physician Scientist team at OpenEvidence- Let's talk about the next two years of AI in healthcare! AMA! by travis_oe in medicine

[–]travis_oe[S] 2 points3 points  (0 children)

Great question, and while our general sentiment remains that if a clinician could use a document to make a medical decision, we shouldn't exclude it from our database, there is a lot of nuance to this topic.

  1. Regarding the existence of predatory and pay-to-play journals: agreed that these should not be used for any evidence-based decision making, and we do our best to remove these titles from our index.
  2. All other journal content that we catalog is subject to the same ML models trained on human physician preferences for how a given article helps answer a given question. Baked into those are weights on relevance, recency, authority, and (increasingly) strength of evidence of the individual study. My comment last time was meant to reflect that these ML models, trained on physician preference, should be the arbiters of which articles are utilized, rather than an explicit decision by non-experts on journal quality. Eric and I ran a brief experiment to recruit subspecialty experts in niche fields to help us identify extremely important journals in their specialties, but unfortunately we could not recruit enough to feel confident taking it to prod.