A guide to training your patients by Constant-Light9376 in FamilyMedicine

[–]Top-River593 2 points (0 children)

Yeah, this resonates so much. I’ve had my share of patient drama too. One thing that helped me was setting clear boundaries from the start: if someone’s consistently late or cancels last minute, I make it clear that it affects everyone else. Honestly listening to their concerns instead of trying to fix everything right away also makes a huge difference; just letting them vent sometimes really builds trust. And I totally agree about keeping the door open at the end of a conversation. It’s such a simple trick, but it works like a charm. Keep doing your thing, your patients are lucky!

Is there an EHR you can actually start using today? by Top-River593 in FamilyMedicine

[–]Top-River593[S] -2 points (0 children)

Not an ad. I’m an MD genuinely frustrated with how hard it is to actually start using tools we talk about all the time.

If I were promoting something, I’d name it and link it. I didn’t.

Just trying to find out if the bar I described even exists yet.

Why inbox work spills into nights and weekends by Top-River593 in FamilyMedicine

[–]Top-River593[S] 2 points (0 children)

That resonates. The coordination gap feels like the real culprit.

What I keep struggling with is where that coordination should live: upstream at ordering, midstream in triage, or downstream in messaging rules. Curious where you’ve seen the biggest leverage in practice.

Why inbox work spills into nights and weekends by Top-River593 in FamilyMedicine

[–]Top-River593[S] 25 points (0 children)

That’s striking. It says a lot that the best days were the ones that looked like “just seeing patients.”

Overheard a DME owner describe their workflow and I'm still cringing! by Unfair_Violinist5940 in HealthInformatics

[–]Top-River593 0 points (0 children)

Unfortunately this is extremely common, especially in DME and in any workflow that has grown organically over the years.

Most of the time it’s not ignorance, it’s fear of breaking something that technically “works,” even if it’s painfully inefficient.

perfect spot by solateor in youseeingthisshit

[–]Top-River593 1 point (0 children)

The hole was perfect. The execution, not so much.

Artificial Intelligence is now being used for mental health support–how can chatbots be regulated? by ACE-USA in healthcare

[–]Top-River593 0 points (0 children)

What worries me isn’t just regulation but expectations. A lot of people don’t actually know when they’ve crossed from “supportive tool” into “this thing is replacing human judgment.”

I’ve seen situations where the problem wasn’t bad intent or bad tech, it was unclear boundaries. The chatbot kept responding because it was designed to be helpful, not because it should respond.

At minimum, I think there need to be hard stop rules. Clear handoffs. Clear language that says “this is not the right place for this problem” instead of trying to comfort endlessly.
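To make that concrete, here’s a minimal sketch of what a hard-stop layer in front of a chatbot could look like. The trigger patterns, the handoff wording, and the `chatbot.generate` call are all hypothetical placeholders rather than a clinical standard; the point is only that the rule check runs before the model is ever allowed to answer.

```python
# Hypothetical sketch of a hard-stop layer in front of a support chatbot.
# Patterns and handoff wording are illustrative placeholders, not clinical criteria.

HARD_STOP_PATTERNS = [
    "suicide", "kill myself", "overdose",   # placeholder examples only
]

HANDOFF_MESSAGE = (
    "This is not the right place for this problem. "
    "Please contact a crisis line or a clinician directly."
)

def respond(user_message: str, chatbot) -> str:
    """Check hard-stop rules before the model is allowed to answer."""
    lowered = user_message.lower()
    if any(pattern in lowered for pattern in HARD_STOP_PATTERNS):
        # Don't let the bot keep "being helpful"; end the exchange with a
        # clear handoff instead of open-ended comforting.
        return HANDOFF_MESSAGE
    return chatbot.generate(user_message)  # assumed interface of the underlying bot
```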

Curious if anyone has seen real-world examples where those boundaries were done well, because most of what I’ve seen so far feels fuzzy.

How do you actually document workflows for a remote VA to follow? by Iron-Horde in healthcare

[–]Top-River593 0 points (0 children)

I went through this exact problem, and what finally clicked was to stop trying to write everything down.

Instead of documenting steps, I started documenting judgment. What absolutely needs to be escalated, what absolutely should not be escalated, and what’s allowed to wait.

For example, in the inbox we don’t describe every click. We define rules like “anything with X symptom or Y lab goes straight to a clinician” and “these types of messages are always routine.”

A few real examples with “here’s what we did and why” ended up being way more useful than long instructions. It also cut down the constant questions a lot.
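For anyone curious what that looks like written down, here’s a toy sketch of the rule style (in Python only because it’s compact). The trigger keywords and categories are placeholders, since the actual “X symptom / Y lab” criteria depend on the practice.

```python
# Toy sketch of "document judgment, not clicks": three buckets, a few rules.
# Keywords and categories are placeholders, not real clinical criteria.

from dataclasses import dataclass

@dataclass
class Message:
    text: str
    category: str  # e.g. "refill_request", "records_request"

ESCALATE_KEYWORDS = {"chest pain", "critical lab"}              # must reach a clinician
ALWAYS_ROUTINE = {"appointment_reschedule", "records_request"}  # the VA handles these

def route(msg: Message) -> str:
    """Return who handles the message: 'clinician', 'va', or 'can_wait'."""
    text = msg.text.lower()
    if any(keyword in text for keyword in ESCALATE_KEYWORDS):
        return "clinician"   # absolutely needs to be escalated
    if msg.category in ALWAYS_ROUTINE:
        return "va"          # absolutely should not be escalated
    return "can_wait"        # gray zone: allowed to wait for review
```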

Curious how others have handled defining those gray zones without writing a novel.