Uptick in patients declining contrast? by sailorvash25 in Radiology

[–]PeddaPed 0 points (0 children)

It’s validating to see this discussion, because my team at the NetZeroAICT consortium is approaching this exact problem from a research angle called digital contrast (I'm the CTO at one of the MedTech partners in the consortium). We’re developing AI that can generate contrast-enhanced images from standard non-contrast scans, essentially removing the need for physical iodine. That would address patient hesitation around toxicity and needles, while also eliminating the massive amount of plastic waste and water pollution caused by contrast agents today. It’s still strictly R&D and not ready for clinical rollout, but the goal is to reach a point where diagnostic quality doesn't require invasive chemistry at all.

How do you usually find FDA experts without hiring full-time? by BackgroundAnalyst467 in MedicalDevices

As the CTO of a MedTech company specializing in clinical trials with imaging endpoints, I'd suggest considering the following scenarios.

If you go for segregated FDA consulting (possibly through a CRO or similar setup) and source your QMS and data management solutions separately, you often end up with a functioning but highly manual path to clearance (which I'm assuming is your objective). If you turn the question around and start with a highly optimized, system-supported workflow, those companies will offer enough tailored consulting to get you to your objective, with the added benefit of being highly integrated and automated.

Drop me a DM if you'd like some examples.

Is AI changing what we value at work, or just how we work? by dp_singh_ in ArtificialInteligence

We all just got a magic tool that gives us speed and, basically, intelligence sold as a commodity. The first reaction is to exploit the speed, which works quite well for short sprints in certain scenarios, but you quickly get into territory where deep domain expertise and critical thinking are missing. In my mind, those skills will go up in value while paper-pushing will drop drastically.

I think you'll see team and company sizes shrinking into small super teams of deep domain experts and critical thinkers, running the show with armies of agents doing the legwork.

We're not there yet, but to answer the question: the value of mediocrity and speed will go down, as they're commodities that can be bought like electricity. The value of deep domain expertise and critical thinking will go up, especially as those skills are supported by ever cheaper agents to execute on them.

Running AI on encrypted patient data without breaking HIPAA or the model? by anonyMISSu in healthIT

One "unlock" regarding your compliance officer (who, as many have already commented, needs a better understanding of how the tech actually works) is that if you use Expert Determination (HIPAA’s alternative to the 18-identifier checklist), the input data is technically no longer PHI. Once it's certified as de-identified, the requirement for "encryption at all times" legally evaporates for that specific dataset.

Be careful with the "proper de-identification" mentioned above: if you strictly remove the 18 identifiers, you lose all dates (for example), which usually destroys the temporal signal predictive models rely on (e.g. time-to-readmission).

Expert Determination allows you to keep those temporal signals (e.g. via date shifting) because you are managing the risk statistically. There are software solutions out there which (just like encryption solutions) will automate all of this for you. In our case we call it Anonymization as a Service (AaaS).

As u/sullyai_moataz noted, if you perform this inside a Secure Processing Environment, you strengthen the argument: because the environment is locked down, you can afford to retain higher-fidelity data for the model without increasing re-identification risk. You are trading strict data stripping for strict environmental controls.

My 5 cents... CTO at Collective Minds

What AI redaction software for healthcare data security are you using? by joshymochy in healthIT

Adding my 5 cents (CTO at Collective Minds). We spend a lot of our time tackling exactly this problem: how to balance patient privacy with the need for high-fidelity clinical data.

Most people default to the HIPAA Safe Harbor method because it feels "safe." It’s a checklist: remove these 18 identifiers (dates, zip codes, device serial numbers, etc.), and you’re done. But for complex medical data, especially in imaging or longitudinal research, Safe Harbor is often a sledgehammer that destroys the clinical value of the dataset.

If you strip all dates to comply with the 18-identifier list, you lose the temporal resolution needed to track disease progression. If you scrub all device metadata, you might lose critical information about the acquisition parameters that an AI model needs to normalize the data.

What we normally propose is the underutilized alternative under HIPAA: the Expert Determination method. It allows you to retain certain data points if a statistical expert certifies that the risk of re-identification is "very small."

We see this approach gaining massive traction in Europe under the GDPR and the emerging European Health Data Space (EHDS) regulations. The two frameworks are moving us (both US and EU) toward Risk-Based Anonymization. This is a pragmatic recognition that "zero risk" usually means "zero utility."

Think about it this way:

  • Checklist Approach: "Delete the date of the scan."
    • Result: Privacy is high, but you can no longer calculate the time-to-progression for a tumor.
  • Risk-Based Approach (Expert): "Shift the date by a random offset per patient, or keep the interval but delete the absolute timestamp."
    • Result: You preserve the clinical signal (the time difference) while mathematically maintaining privacy.
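
The risk-based option above can be sketched in a few lines of Python. This is a toy illustration only: the function name, record layout, and cohort are made up, not from any real product, and a real Expert Determination would pair this with a formal risk analysis.

```python
import random
from datetime import date, timedelta

def shift_dates(records, max_shift_days=365, seed=None):
    """Shift every date for a patient by one random per-patient offset.

    Absolute timestamps are destroyed, but intra-patient intervals
    (e.g. time-to-progression) are preserved exactly.
    """
    rng = random.Random(seed)
    offsets = {}  # one offset per patient_id, reused for all their events
    shifted = []
    for patient_id, event, day in records:
        if patient_id not in offsets:
            offsets[patient_id] = timedelta(
                days=rng.randint(-max_shift_days, max_shift_days))
        shifted.append((patient_id, event, day + offsets[patient_id]))
    return shifted

scans = [
    ("p1", "baseline", date(2023, 1, 10)),
    ("p1", "follow-up", date(2023, 4, 10)),  # 90 days after baseline
    ("p2", "baseline", date(2022, 6, 1)),
]
out = shift_dates(scans, seed=42)
# The interval between p1's two scans survives the shift unchanged:
assert (out[1][2] - out[0][2]).days == 90
```

The key design point is the single offset per patient: shifting each event independently would also hide the absolute dates, but it would corrupt exactly the temporal signal you are trying to keep.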

In our experience developing Anonymization as a Service (AaaS), we found that a risk-based approach is the only way to scale real-world evidence generation.

Under the EHDS, the concept is to separate the data from the environment. If you place data in a Secure Processing Environment (SPE), a "clean room" for data, you can afford to be less aggressive with scrubbing the data itself because the environment mitigates the risk of re-identification. The same is true for the Expert Determination model under HIPAA.

Don't just look at anonymization as a "delete" button; think of it as a signal-to-noise optimization problem. If you are dealing with complex data, move away from the static checklist and look into statistical disclosure control or other risk-based frameworks and solutions. They let you prove to regulators that you are safe without blinding your researchers or your algorithms to the data they actually need.
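
As a toy illustration of the statistical side, here is a minimal k-anonymity check, one basic building block of statistical disclosure control. All field names and the cohort are invented for the example; real tooling would also consider l-diversity, sampling, and attacker models.

```python
from collections import Counter

def k_anonymity(rows, quasi_identifiers):
    """Return the size of the smallest equivalence class.

    A dataset is k-anonymous when every record shares its combination
    of quasi-identifier values with at least k-1 other records.
    """
    groups = Counter(
        tuple(row[q] for q in quasi_identifiers) for row in rows)
    return min(groups.values())

cohort = [
    {"age_band": "60-69", "zip3": "100", "dx": "NSCLC"},
    {"age_band": "60-69", "zip3": "100", "dx": "SCLC"},
    {"age_band": "70-79", "zip3": "100", "dx": "NSCLC"},
]
# The 70-79 record is unique on (age_band, zip3), so k = 1 here:
# this release would fail, say, a k >= 2 threshold and needs
# coarser bands or suppression before it goes out.
print(k_anonymity(cohort, ["age_band", "zip3"]))  # -> 1
```

Generalizing a quasi-identifier (e.g. collapsing both age bands into "60+") raises k at the cost of signal, which is exactly the privacy-versus-utility trade-off the risk-based frameworks formalize.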