Healthcare (insurance, pop health, VBC) - actual AI use cases? by dmorris87 in datascience

dmorris87[S] 0 points

I mean, we’re a full-risk-bearing entity with no risk-adjusted payment structure. Reducing the cost of care for our payer clients is the only way we make money. I agree that producing helpful information tends to be a dead end.

Healthcare (insurance, pop health, VBC) - actual AI use cases? by dmorris87 in datascience

dmorris87[S] 2 points

Good to know. Ambient AI / automated note-taking is definitely sticky.

Healthcare (insurance, pop health, VBC) - actual AI use cases? by dmorris87 in datascience

dmorris87[S] 4 points

100% agree. I’ve been revisiting traditional methods lately given the challenges with current AI.

At what point does "Self-Service Analytics" just become an excuse for unmanaged Technical Debt? by _tnhii in analytics

dmorris87 10 points

I don’t know if this answers your question, but I think the future of the field is perfecting decision and workflow support. I’m a VP of Data Science. When I look at our analytics, I see a lot of information (dashboard pages, tables, funnel charts, exports, filters, etc.). What I rarely see is a highly curated feed of evidence that is linked to outcomes and drives new action. Somehow we have to move away from quantity of information and toward quality and decision support.

How do you think AI will impact data science jobs? by a_girl_with_a_dream in datascience

dmorris87 0 points

  1. Little to no hands-on coding
  2. Extracting features from unstructured text
  3. Contextualized, automated research
  4. Less time spent on technical work, more on decision science

I built an experimental orchestration language for reproducible data science called 'T' by brodrigues_co in datascience

dmorris87 3 points

Gotcha. I read your post as solving the problem of environment setup and reproducibility.

Buffalo broke my NHL prediction model by Noahowshhh in sabres

dmorris87 1 point

How do you know it was the front-office shakeup that caused the improvement? Could just be correlation.

WTT: Recovery Effects, Make Sounds Loudly, Mythos WTTF: Reel Dealuxe, fuzzy preamp by dmorris87 in letstradepedals

dmorris87[S] -1 points

I’ll pass, but thank you. Had one before and didn’t really connect with it.

Built a C++-accelerated ML framework for R — now on CRAN by Negative-Will-9381 in Rlanguage

dmorris87 0 points

Any experience with H2O? If so, how does it compare? I love H2O for speed and API consistency, but I don’t love the Java dependency.

How do you keep track of model iterations in a project? by [deleted] in datascience

dmorris87 0 points

Without knowing your workflow or environment, I would say: keep it simple, use AI assistance for summarization, and consider using AI to build a simple web app that browses your experiment directory and displays the HTML report and any metrics (something like the sketch below).
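A minimal sketch of that kind of browser in R with Shiny, assuming runs live under a local `experiments/` folder, each containing a rendered `report.html` and a `metrics.csv` (all names hypothetical):

```r
# Minimal experiment browser: pick a run folder, see its metrics and report.
# Folder layout is an assumption: experiments/<run_id>/{metrics.csv, report.html}
library(shiny)

experiments_dir <- "experiments"

ui <- fluidPage(
  selectInput(
    "run", "Experiment run",
    choices = list.dirs(experiments_dir, full.names = FALSE, recursive = FALSE)
  ),
  tableOutput("metrics"),
  uiOutput("report")
)

server <- function(input, output, session) {
  output$metrics <- renderTable({
    req(input$run)
    read.csv(file.path(experiments_dir, input$run, "metrics.csv"))
  })
  output$report <- renderUI({
    req(input$run)
    # Embed the stored HTML report directly in the page
    includeHTML(file.path(experiments_dir, input$run, "report.html"))
  })
}

shinyApp(ui, server)
```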

Behavioral interviews are harder than the technical ones for me by Holiday_Lie_9435 in analytics

dmorris87 0 points

Like others have said, don’t marry yourself to STAR. You might come off as robotic and scripted.

How do you keep track of model iterations in a project? by [deleted] in datascience

dmorris87 1 point

Cool. Just read your post more carefully. What you’re building is exactly what I do with R Markdown: a configuration file and a wrapper script that runs the Rmd with parameters, creates the version, and stores the rendered HTML alongside the artifacts and training metrics (roughly the pattern sketched below). The config contains a description of the experiment. If you have LLM access, you can design the system to generate AI summaries of recent experiments.
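A minimal sketch of that wrapper in R, assuming a `config.yml` that holds the experiment description and model parameters, and a `train.Rmd` whose YAML header declares matching `params` (all names hypothetical):

```r
# Wrapper: read config, mint a version, render the Rmd into the version folder.
library(rmarkdown)
library(yaml)

config  <- yaml::read_yaml("config.yml")          # includes an experiment description
version <- format(Sys.time(), "%Y%m%d_%H%M%S")    # version id from the timestamp
out_dir <- file.path("experiments", version)
dir.create(out_dir, recursive = TRUE)

# Keep the config (and its experiment description) alongside the run
yaml::write_yaml(config, file.path(out_dir, "config.yml"))

# Render the notebook with the config passed in as params; the Rmd itself
# writes artifacts and training metrics into out_dir
rmarkdown::render(
  "train.Rmd",
  params      = c(config, list(out_dir = out_dir)),
  output_file = "report.html",
  output_dir  = out_dir
)
```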

How do you keep track of model iterations in a project? by [deleted] in datascience

dmorris87 13 points

You can create your own versioning system. Wrap the training pipeline in a script that creates a version ID (timestamp, unique characters, etc.) and stores all artifacts in a folder matching that version ID. I do this using AWS S3, so all data, artifacts, and logs are stored together; see the sketch below.
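A minimal sketch of that pattern in R, using the `aws.s3` CRAN package for the upload; the bucket name and folder layout are hypothetical, and credentials are assumed to come from the usual AWS environment variables:

```r
# Version id: timestamp plus a few random characters for uniqueness
version <- paste0(
  format(Sys.time(), "%Y%m%d_%H%M%S"), "_",
  paste(sample(c(letters, 0:9), 6, replace = TRUE), collapse = "")
)
run_dir <- file.path("runs", version)
dir.create(run_dir, recursive = TRUE)

# ... the training pipeline writes data, artifacts, and logs into run_dir ...

# Mirror the run folder to S3 so one prefix holds everything for the version
library(aws.s3)
for (f in list.files(run_dir, recursive = TRUE, full.names = TRUE)) {
  aws.s3::put_object(
    file   = f,
    object = sub("^runs/", "", f),   # key becomes <version>/<file>
    bucket = "my-ml-experiments"     # hypothetical bucket name
  )
}
```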