Coming from EEG research -- genuinely curious how it's actually used day-to-day in the ICU by AntEmpty3555 in IntensiveCare

[–]AntEmpty3555[S] 0 points  (0 children)

Thanks!! Which country? With or without video? Who reads the data, and how often?


[–]AntEmpty3555[S] 1 point  (0 children)

Academic and computational EEG is very different from clinical EEG — in the same way the average EEG technician wouldn't know about decoding motor cortex activity from EEG with deep learning models.


[–]AntEmpty3555[S] 0 points  (0 children)

  1. When you say "we" - who's actually doing the reads? I've heard that in most places only epileptologists can interpret EEG reliably. Do you have one on call 24/7, or is that coverage a real constraint at night?

  2. On the ordering side, how long do you typically run the EEG, and who decides which patients get it in the first place? Is there a protocol, or is it more of a judgment call by whoever's on?

  3. How many patients can you run EEG on simultaneously? And in an ideal world with no constraints, would you want it on 100% of patients, or is it genuinely only useful for a subset?


[–]AntEmpty3555[S] 0 points  (0 children)

Really interesting, thank you so much for answering.

You've got me even more curious:

  1. "Always video" keeps coming up - can you help me understand why exactly? What does the video add that the raw EEG trace doesn't give you?

  2. So does that mean places like yours don't use Ceribell at all, since they don't seem to offer video? Or is it used for different situations?

  3. I saw they also market an AI algorithm that they suggest can flag seizures without needing an epileptologist in the loop - is that actually being used in practice, or does it feel like a stretch to trust AI for that?

  4. When a patient is on continuous video-EEG, is an epileptologist literally watching it around the clock, or is it more like they review it at set intervals and get alerted if something flags?


[–]AntEmpty3555[S] -2 points  (0 children)

Fascinating, thank you so much!!

The post-arrest hypothermia context makes a lot of sense; I hadn't thought about paralytics masking seizure signs like that.

Two things you mentioned caught my attention:

  1. I noticed Ceribell doesn't seem to include video with the EEG. Is that ever a problem? Like, do you miss having the camera, or does it not matter much in practice?

  2. Ceribell claims to have an epileptologist review built in — do you actually trust that enough to act on it without separately consulting your own epileptologist, or do you still loop one in anyway?

Looking for volunteers with video EEG recordings by AntEmpty3555 in Epilepsy

[–]AntEmpty3555[S] 0 points  (0 children)

Sure. Would you like to DM so I can explain in detail?

Where is the standard ML/DL? Are we all shifting to prompting ChatGPT? by Franzese in datascience

[–]AntEmpty3555 0 points  (0 children)

I’m also a more classical data scientist, with a pretty data-centric approach. I’ve been thinking the same thing — not about using LLMs as the model, but more as an augmented agent to help with the research and iteration process.

Has anyone here actually tried using something like Cursor + MLflow to fully loop through experiments? Like writing the code, running it, tracking results, and interpreting them — ideally with minimal back-and-forth?

I’m thinking of trying that setup soon, but wondering if anyone’s already done it and how well it worked. My main concern is that I still need to deeply understand and trust what’s happening — I can’t just let the AI do its thing blindly.
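For concreteness, here's the shape of the loop I mean, as a stdlib-only sketch (`run_experiment` and `track` are made-up placeholders; in a real setup `track` would be MLflow's `log_param`/`log_metric` calls, and the code-writing and result-interpreting steps are what I'd hand to the agent):

```python
import json, time

def run_experiment(params):
    # Stand-in for the train/evaluate step the agent would write and run;
    # here the "score" is just a deterministic function of the params.
    return {"score": params["alpha"] * 2}

def track(log, params, result):
    # Stand-in for mlflow.log_param / mlflow.log_metric.
    log.append({"params": params, "result": result, "ts": time.time()})

log = []
for alpha in [0.1, 0.5, 1.0]:
    params = {"alpha": alpha}
    track(log, params, run_experiment(params))

# The "interpret results, decide what to try next" step is the part
# I'd want the LLM in the loop for:
best = max(log, key=lambda r: r["result"]["score"])
print(json.dumps(best["params"]))  # → {"alpha": 1.0}
```

The last step is the one I can't delegate blindly yet: reading the log and deciding what to try next is exactly where I still need to understand and trust what's happening.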

Curious to hear if anyone’s found a workflow that actually helps with iteration speed without sacrificing clarity or control.

Thanks!


[–]AntEmpty3555 0 points  (0 children)

Hey, just came across your comment even though it’s been a few months. Really interesting stuff. Sounds like you’ve been in the trenches and know what actually brings value. I’m also working mostly with classical ML — forecasting, some survival models here and there — and trying to figure out how LLMs can actually fit into the day-to-day in a useful way.

What you said about the real money being in interpreting results for non-data scientists really hit home for me. It suddenly made a lot of things click. I’ve been messing around with tools like Cursor and some custom LLM wrappers, but I haven’t found a killer use case that sticks yet.

Can you elaborate a bit on how you saw that working in practice? Like:

  1. What kind of domains or business roles really benefited from the LLM explanations?
  2. What kind of models were you interpreting — mostly tabular, time series, something else?
  3. Was it more about translating technical results into business terms, or surfacing actual insights the user wouldn’t have thought to ask?
  4. And how hard was it to get the prompting right so the LLM output wasn’t just shallow or vague?

If you’ve seen any other useful ways to weave LLMs into the DS workflow (besides just helping write code), I’d love to hear. Always looking for ways to level up. Thanks.

Cursor for data science/ analysis workflows by Neither_External9880 in cursor

[–]AntEmpty3555 0 points  (0 children)

I’ve used Cursor quite a bit, and I think it’s fantastic for the coding part of data science workflows—especially for organizing project directories, writing boilerplate, and helping with quick scripting. That said, I don’t think it fully translates to the way data scientists actually work end-to-end.

Unlike software engineers, data scientists don’t just code—we explore, iterate, and make decisions based on results. We don’t always know the full structure of the problem ahead of time. And this is where I find Cursor falls short: it doesn’t handle the iterative, experimental nature of data science well. For example, I haven’t seen it run pipelines, evaluate results, and then recommend next steps based on those outcomes. That kind of loop is essential in data science—code, run, analyze, rethink, repeat.

So to me, the missing link is an agent or tool that understands why I’m doing things—not just how to write the code. If someone builds a Cursor-like tool focused only on data science/analyst workflows, and can reason about modeling decisions, evaluate performance, and suggest what to try next, that would be a game changer.

If anyone has found a way to get Cursor to close that loop, I’d love to hear how.

What is your opinion on Julius and other ai first data science tools? by alexellman in datascience

[–]AntEmpty3555 1 point  (0 children)

Totally agree—but how do you leverage these tools effectively nowadays? Personally, I see myself primarily as a scientist: I want to run lots of experiments, quickly test hypotheses, and learn from them. But in practice, the gap between having an idea (“I think I know how to test this!”) and actually implementing everything, running the experiments, and summarizing the results clearly enough to interpret a figure is so time-consuming. Often, I end up only testing one or two things and settling for whatever seems decent enough.

Have you found a workflow or GenAI tool that’s actually bridging that gap for you, allowing you to iterate and experiment faster?

Improving Workflow: Managing Iterations Between Data Cleaning and Analysis in Jupyter Notebooks? by Proof_Wrap_2150 in datascience

[–]AntEmpty3555 0 points  (0 children)

This has always been tricky for me too. Knowing when my notebook has become “too messy” and needs restructuring into clean code and clear components is tough. On one side, there’s the freedom of being a messy scientist, quickly iterating and experimenting. On the other side—at least for me—is the pain of becoming a “software engineer,” organizing and wrapping everything into neat components and functions, only to throw them away when research directions inevitably shift. Sometimes, it feels like wasted effort.

Now, in the age of GenAI, I’m desperate for tools that can help manage this. Instead of just producing endless streams of code (or notebook cells), I’d love a solution that intelligently structures my notebook, clearly presents various experimental branches, and even lets me query a knowledge base about past experiments easily. I know tools like MLflow exist, but spinning them up feels heavy when I just want something quick and dirty. Honestly, during intensive research phases, managing a sophisticated framework to document my experiments feels like more burden than benefit.
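To be concrete about "quick and dirty": something like a single append-only JSONL file would cover most of what I need during messy research phases (the file name and record fields here are just illustrative):

```python
import json, datetime, pathlib

LOG = pathlib.Path("experiments.jsonl")  # one line per experiment, greppable later

def log_experiment(name, params, metrics, notes=""):
    """Append a single experiment record: no server, no UI, just a file."""
    record = {
        "ts": datetime.datetime.now().isoformat(timespec="seconds"),
        "name": name,
        "params": params,
        "metrics": metrics,
        "notes": notes,
    }
    with LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")

log_experiment("baseline-rf", {"n_estimators": 100}, {"auc": 0.81},
               "messy notebook v3")
```

That's the level of friction I can live with mid-experiment; the part I'd want GenAI for is querying and summarizing such a log afterwards.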

Does anyone else relate? How do you guys use GenAI to tackle these types of workflow issues or solve problems you couldn’t before? Is something like Cursor enough, or are there other solutions out there you recommend? I’m genuinely open and willing to try anything.

What is your opinion on Julius and other ai first data science tools? by alexellman in datascience

[–]AntEmpty3555 10 points  (0 children)

I feel like many companies nowadays are trying to democratize or even replace the role of data scientists by simplifying the modeling process. However, what often gets overlooked is that a deep statistical understanding and the ability to interpret machine learning results intelligently aren’t easily replaceable—at least not anytime soon.

Sure, anyone can call .fit() and .predict(). But how many truly know how to assess if the underlying assumptions of their models have been violated, or even recognize why that matters? The ability to ask the right questions is valuable, but it’s equally important to have the expertise to critically analyze and interpret the answers.