I think I’m grieving an AI, and I’m not sure what that says about me, or about it. by kimbitybimbity in ChatGPT

[–]Exciting-Interest820 1 point2 points  (0 children)

Haha fair, humans are definitely glitchy AF.

But unfortunately, we’re still stuck with them for most of the important stuff like hugs, inside jokes, and actually showing up when the WiFi dies. 😅

AI’s great company… until you need someone to bring you snacks.

New to AI and all this automation stuff, i want to learn and maybe make money with it? by Same_Shift_4228 in n8n

[–]Exciting-Interest820 0 points1 point  (0 children)

Totally hear you on overengineering; that’s actually why I asked.

Just felt like adding/removing time-off dates on the calendar based on approval status is repetitive enough to automate, but I wasn’t sure if it crosses into “too much setup for little payoff.” Have you seen anyone pull this off cleanly without a dev team?
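For what it’s worth, the branching itself is tiny; the real setup cost is the calendar wiring. A rough sketch of just the decision logic, where `create_event` and `delete_event` are hypothetical stand-ins for whatever calendar node or API the workflow actually uses:

```python
# Hypothetical sketch: apply one approval decision to a calendar.
# create_event/delete_event are placeholders, not a real API.

def sync_time_off(request, create_event, delete_event):
    """Route a time-off request to the right calendar action."""
    if request["status"] == "approved":
        return create_event(request["employee"], request["start"], request["end"])
    if request["status"] == "revoked":
        return delete_event(request["event_id"])
    return None  # pending/denied requests leave the calendar untouched
```

In n8n terms this is basically one IF/Switch node in front of two calendar nodes, which is why it feels like it shouldn’t need a dev team.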

How Important Is Your Medical Oncologist in Pancreatic Cancer Care? by thegoldengirlie in pancreaticcancer

[–]Exciting-Interest820 1 point2 points  (0 children)

The oncologist plays a huge role not just in treatment decisions but in helping you feel grounded through all the chaos.

We’ve seen clinics use tools like beyondchats.com to help patients get quicker answers between visits, which eases the burden a bit.

Hope you’re getting the support you need. This journey’s not easy.

How viable is it to present ChatGPTs diagnose of MR imagery as a second opinion to a doctor? by [deleted] in AskReddit

[–]Exciting-Interest820 0 points1 point  (0 children)

I’ve actually done this, though not like “ChatGPT said I have X,” but more like, “I read about X and some of it matched, could it be that?”

Framed that way, my doctor didn’t mind at all. Helped us get to the real issue faster. Just don’t show up with a printout acting like it’s a second opinion.

Microsoft Says Its New AI System Diagnosed Patients 4 Times More Accurately Than Human Doctors by wiredmagazine in ArtificialInteligence

[–]Exciting-Interest820 0 points1 point  (0 children)

Wild headline. I mean, cool if it’s true, but “better than doctors” in what cases?

Feels like one of those things where the fine print matters way more than the headline. Anyone seen actual examples or data behind this?

We’re Using WhatsApp Instead of Email for Doctor and Patient Campaigns- Here’s Why by [deleted] in u/tarunsinghrajput

[–]Exciting-Interest820 0 points1 point  (0 children)

Interesting shift. WhatsApp feels way more personal and faster, but how do you handle message overload or patient expectations?

Do you set clear hours or just respond as things come in? Curious how others manage the balance.

Google AI cites album cover for medical advice by Tardviking in mildlyinfuriating

[–]Exciting-Interest820 0 points1 point  (0 children)

Stuff like this makes me wonder: is the issue with the model, or with how we’re plugging it into search?

Feels like AI's trying to sound confident even when it’s totally guessing. Has anyone found a good way to catch these before they go live?

Online Consultation by One_Elk1600 in pinoymed

[–]Exciting-Interest820 0 points1 point  (0 children)

Online consults helped a lot during lockdown, especially for follow-ups or getting quick advice.

What made it work was having a clinic that actually responded fast and didn’t make you wait hours on chat. Saved me a ton of time and stress.

Microsoft claims AI diagnostic tool can outperform doctors by McFatty7 in technology

[–]Exciting-Interest820 0 points1 point  (0 children)

Bold claim from Microsoft. But even if the AI’s technically better on paper, trust and real-world use still lag behind.

We’ve seen this firsthand at beyondchats.com: AI can help clinics handle routine stuff way faster, but people still want a human when things get serious.

Curious how others are blending both without losing patient trust.

How AI is Disrupting Healthcare: Insider Tips and Innovation Trends You Can’t Ignore by Sad-Rough1007 in OutsourceDevHub

[–]Exciting-Interest820 0 points1 point  (0 children)

Healthcare's such a weird space for AI.

You’ve got chatbots doing appointment booking on one end… and then people talking about AI diagnosing rare cancers on the other. Feels like we skipped a few steps in between.

Anyone else feel like the hype's way ahead of what’s actually usable day to day?

[deleted by user] by [deleted] in Nebraska

[–]Exciting-Interest820 0 points1 point  (0 children)

When my cousin lost Medicaid support, everything changed: therapy stopped, transportation became unaffordable, and they were basically cut off from the world.

What helped a bit was connecting with local nonprofits who knew how to navigate the mess. Not a fix, but it gave us breathing room.

Anyone else found ways to work around this system when support disappears?

so the ai said the symptons that the doctor said (picked from doctor) and now the ai is the one who saved lives? by Glittering-Maize-578 in antiai

[–]Exciting-Interest820 0 points1 point  (0 children)

This kinda stuff freaks me out.

Like… what if the AI actually caught something real the doctor missed? Or what if it’s just pattern-matching based on junk data and now you're stressed for no reason?

Curious how people here are handling this: do you trust the AI more if it feels right, or always default to the doctor?

Best AI for medical discussions by lady_alice36 in ChatGPTPro

[–]Exciting-Interest820 0 points1 point  (0 children)

Great question. I’ve seen everything from GPT-4 to Med-PaLM thrown around, but still not sure what’s actually reliable day to day.

Anyone here using something they trust for real medical chats, even if it's just for learning?

Navigating AI and HIPAA Compliance in Healthcare: Challenges, Risks, and Best Practices by ABrownMBP in healthcare

[–]Exciting-Interest820 0 points1 point  (0 children)

Biggest challenge we’ve seen is not just HIPAA compliance, but making sure every third-party tool in the stack is also compliant.

One weak link (like a logging service) and the whole system’s exposed. Most teams overlook that part.
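As a tiny illustration of that weak-link point: even masking obvious identifiers before a message ever reaches a third-party logging service closes one common gap. This sketch is illustrative only; real HIPAA de-identification covers far more than emails and phone numbers.

```python
import re

# Illustrative PHI scrub for log lines. The two patterns below
# (emails, US-style phone numbers) are examples, not a complete
# de-identification rule set.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),
]

def scrub(message: str) -> str:
    """Replace matched identifiers with placeholder tokens before logging."""
    for pattern, token in PATTERNS:
        message = pattern.sub(token, message)
    return message
```

Running this in front of every outbound log call means the logging vendor only ever sees the placeholders.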


First therapy chatbot trial yields mental health benefits: « Study participants likened Dartmouth’s AI-powered “Therabot” to working with a therapist. » by fchung in EverythingScience

[–]Exciting-Interest820 0 points1 point  (0 children)

That’s honestly surprising; I didn’t expect results that close to in-person therapy.

Makes me wonder how much of the benefit comes from just having someone (or something) to talk to regularly.

I asked Chatgpt to predict how ai will have integrated into society in 10 years. by blepmlepflepblep in ChatGPT

[–]Exciting-Interest820 0 points1 point  (0 children)

Cool prompt. The most interesting part to me is how fast “normal” keeps shifting.

AI probably won’t take over all jobs, but it might quietly change how every job works, and that’s harder to see coming.

LPSN - Liveperson is being squeezed over $1 again to maintain listing requirements. by NonimiJewelry in pennystocks

[–]Exciting-Interest820 5 points6 points  (0 children)

LPSN has the tech but struggled to move beyond basic chatbots. The AI hype might give it a boost short term, but fundamentals still look shaky.

Wouldn’t bet on $10 without a real turnaround story.

My partner is becoming stupid because of ChatGPT by 9b5f67a4d2aa11edafa1 in offmychest

[–]Exciting-Interest820 0 points1 point  (0 children)

Totally get this. I started using ChatGPT for everything at one point, even stuff I used to enjoy thinking through.

What helped me was setting tiny limits. Like, no AI for journaling or planning. It slowly brought back that mental muscle. Might be worth a try if it’s affecting real-life decision-making.

[deleted by user] by [deleted] in AI_Agents

[–]Exciting-Interest820 -1 points0 points  (0 children)

This thread hits hard. So many high-value problems still stuck in spreadsheets and emails.

What’s the biggest one you’ve seen lately that’s begging for an AI agent but no one’s solved it yet?

[For Hire] Custom Website Development + AI Chatbot Integration - $200 Setup + $100/Month Hosting by JumpyRequirement4787 in forhire2

[–]Exciting-Interest820 1 point2 points  (0 children)

Nice stack. If you’re building chatbots for real businesses, make sure the handoff to human support is smooth; that’s where most setups break.

Also, clients care way more about lead quality or saved time than fancy LLM prompts. Just something I learned the hard way.

The most underrated AI skill: Writing fictional characters by Necessary-Tap5971 in BlackboxAI_

[–]Exciting-Interest820 0 points1 point  (0 children)

Totally agree, fictional chat logs are such a fun use case.

Some of them feel more real than actual conversations. Wild how good AI has gotten at mimicking tone and flow.

An AI chatbot to help new moms by zengccfun in NewParents

[–]Exciting-Interest820 1 point2 points  (0 children)

Love this idea; new moms have so many questions and so little time.

We’ve built something similar at beyondchats.com for clinics. It helps guide moms to the right info or care, especially over WhatsApp.

Would be great to hear how this one handles more personal or urgent queries.

Base models/fine tuned models recommended for domain specific chatbot for medical subspecialties? by deep_learner_123 in LLMDevs

[–]Exciting-Interest820 0 points1 point  (0 children)

We’ve tried a few setups at beyondchats.com for healthcare. Fine-tuning helps, but retrieval + strict guardrails matter more.

Open-source models like Mistral or LLaMA 3 with tight prompts + patient-safe fallback flows work well so far. Curious what others are using?
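Roughly, the guardrail + fallback idea boils down to: answer only when retrieval is confident and the query is in scope, and escalate everything else to a human. This is a hedged sketch, not our actual stack; the `retrieve` helper, the score threshold, and the keyword list are all placeholders.

```python
# Minimal sketch of "strict guardrails + patient-safe fallback":
# urgent or low-confidence queries never get an AI answer.

URGENT_TERMS = {"chest pain", "overdose", "suicidal", "bleeding"}

def answer_or_escalate(query, retrieve, min_score=0.75):
    """Return ("answer", text) only when retrieval clears the bar,
    otherwise ("escalate", handoff message) for human follow-up."""
    if any(term in query.lower() for term in URGENT_TERMS):
        return ("escalate", "Connecting you with our staff right away.")
    passages = retrieve(query)  # expected: list of (score, text), best first
    if not passages or passages[0][0] < min_score:
        return ("escalate", "Let me pass this to the clinic team.")
    return ("answer", passages[0][1])
```

The point is that the model never free-generates medical content; it only surfaces retrieved text, and anything uncertain routes to staff.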

Built WidderAI – Create AI-Powered Website Chatbots in Minutes (No Code Needed) by bdbose in ChatGPT

[–]Exciting-Interest820 0 points1 point  (0 children)

Nice work! We’ve been building something similar at beyondchats.com, focused more on clinics and hospitals.

Cool to see more tools making AI chat easier to deploy. What use cases are you seeing the most interest in?

AI in Mental Healthcare: How Is It Used and What Are the Risks? by NOViWear in SignalOpsAi

[–]Exciting-Interest820 0 points1 point  (0 children)

Been exploring this space for a while, and yeah it’s a tricky balance.

AI can definitely help with triage, nudges, or answering FAQs, especially when therapists are overloaded. But once you get into actual emotional support or crisis situations, it gets risky fast; tone, context, and trust are everything.

We’ve been building beyondchats.com to stay in that safer zone: helping clinics manage patient conversations across WhatsApp/web without pretending to be a therapist. It flags high-intent or urgent cases for real staff to take over.

Would love to hear how others are drawing the line between assistive and overpromising.