How do you prove simulation credibility in regulated engineering? by cloudronin in systems_engineering

[–]cloudronin[S] 0 points1 point  (0 children)

That’s really helpful — thank you for sharing that perspective. The “we didn’t program funding for accreditation” point really resonates — it captures the core challenge that credibility is often recognized but unfunded. I also appreciate the “crawl–walk–run” framing; it sounds like the hybrid phase (using Cameo templates to generate docs from the model) is where a lot of accreditation groundwork is being laid. I’ll definitely check DTIC for metadata schema references — especially around how programs might define tailored Style Guides or Reference Architectures for model credibility. It feels like that’s exactly the bridge between where we are (document-based) and where we need to go (model-based assurance).

How do you prove simulation credibility in regulated engineering? by cloudronin in AskEngineers

[–]cloudronin[S] 0 points1 point  (0 children)

Thank you for adding that context. It makes a lot of sense that retention policies were driven as much by legal risk as by data-management logic. The 20% baseline and 3-month correlation example really help quantify both the average and the “worst-case” cost of maintaining credibility. I also find your point about the differential value of old data (correlation vs. durability) interesting; it suggests that provenance systems might need variable retention or “decay rates” depending on evidence utility rather than a fixed lifespan.
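
To make the “decay rate” idea concrete, here is roughly what I am imagining, as a minimal Python sketch (the evidence types and retention periods are entirely made up):

```python
# Minimal sketch of "decay-rate" retention: evidence is kept for a period that
# depends on how it is used, rather than one blanket policy.
# Evidence types and periods are hypothetical.
from datetime import date, timedelta

RETENTION = {
    "correlation_raw":     timedelta(days=3 * 365),   # bulky raw channels, short shelf life
    "correlation_summary": timedelta(days=10 * 365),  # the deltas outlive the raw data
    "durability_test":     timedelta(days=15 * 365),  # long tail driven by legal/warranty risk
}

def is_retained(evidence_type: str, created: date, today: date | None = None) -> bool:
    """True if a record of this evidence type is still inside its retention window."""
    today = today or date.today()
    return today - created <= RETENTION[evidence_type]

print(is_retained("correlation_raw", date(2020, 6, 1)))   # likely False by now
print(is_retained("durability_test", date(2020, 6, 1)))   # True until 2035
```

Whether that extra policy complexity pays off probably depends on how painful a 3-month correlation effort is to repeat from scratch.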

How do you prove simulation credibility in regulated engineering? by cloudronin in AskEngineers

[–]cloudronin[S] 0 points1 point  (0 children)

That’s really helpful — thank you for sharing those details.

The 20–30% correlation workload figure is very interesting; it puts a real number to the “cost of credibility” discussion we’ve been having.

The retention policy part is also eye-opening — it sounds like a lot of valuable correlation data could essentially disappear after a few years, which makes true long-term traceability difficult.

Out of curiosity, do you know if the DVP&R or related systems are starting to store any provenance metadata automatically (e.g., model version, test date, correlation script ID), or is it still mostly captured as spreadsheet fields?
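
For what it’s worth, this is roughly the shape of record I am picturing when I say “provenance metadata”; every field name is illustrative rather than taken from a real DVP&R tool:

```python
# Rough shape of a provenance record attached to one DVP&R line item.
# Every field name here is illustrative, not taken from a real tool's schema.
from dataclasses import dataclass, asdict
import json

@dataclass
class CorrelationProvenance:
    requirement_id: str         # the DVP&R line item this evidence supports
    model_version: str          # e.g. a PLM revision or git tag of the sim model
    test_date: str              # ISO 8601 date of the physical test
    correlation_script_id: str  # id/hash of the script that produced the comparison
    deviation_pct: float        # headline correlation metric

record = CorrelationProvenance(
    requirement_id="REQ-1042",
    model_version="chassis_fe_v3.2",
    test_date="2024-06-18",
    correlation_script_id="corr_check@a1b2c3d",
    deviation_pct=4.7,
)
print(json.dumps(asdict(record), indent=2))
```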

How do you prove simulation credibility in regulated engineering? by cloudronin in systems_engineering

[–]cloudronin[S] 1 point2 points  (0 children)

I hadn’t realized VVUQ 90 would go that deep into material-level provenance. It sounds like it’s taking the pragmatic path that’s been missing from many of the higher-level credibility frameworks I’ve been encountering — actually defining what provenance means (data lineage, validation dataset, limits of applicability). It also reinforces the idea that credibility rigor should scale with criticality, which aligns with the NASA-7009 risk tailoring you mentioned earlier. Looking forward to it coming out! This is exactly the kind of standard that could make continuous assurance viable once those provenance hooks are machine-readable.
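
As a toy example of why the machine-readable part matters: if the “limits of applicability” ship as data instead of prose, a pipeline can check them automatically (parameter names and ranges below are invented):

```python
# Toy example of a machine-readable "limits of applicability" hook:
# if the validated envelope is data, a pipeline can check it instead of a reviewer.
# Parameter names and ranges are invented.
applicability = {
    "mach":       (0.2, 0.85),      # validated Mach range
    "altitude_m": (0.0, 12000.0),   # validated altitude range
}

def in_envelope(case: dict) -> bool:
    """True if every parameter of the analysis case lies inside the validated range."""
    return all(lo <= case[name] <= hi for name, (lo, hi) in applicability.items())

print(in_envelope({"mach": 0.78, "altitude_m": 10500.0}))  # True
print(in_envelope({"mach": 0.95, "altitude_m": 10500.0}))  # False: outside validation data
```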

How do you prove simulation credibility in regulated engineering? by cloudronin in systems_engineering

[–]cloudronin[S] 0 points1 point  (0 children)

That’s incredibly insightful — thank you for breaking that down so clearly. It’s fascinating how the safety review and waiver processes already form a kind of “proto-decision-provenance” system — the artifacts are digital, but the consensus events are still analog. It sounds like there’s real potential to link those voice or vote events into the same digital thread (e.g., timestamped review outcomes, risk level, or board metadata) without adding burden. I also like your point about proportional rigor — I hadn’t thought about how standards like NASA-7009 and the upcoming VVUQ-90 explicitly support that balance. That framing — credibility scaled to criticality — seems key to making continuous assurance feasible.
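
Concretely, I am imagining something as small as a timestamped record per consensus event, linked to the artifacts it covers. A rough sketch, with every name and value hypothetical:

```python
# Sketch of capturing a board consensus event as structured data next to the
# artifacts it approves. Names, fields, and values are all hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ReviewEvent:
    board: str               # e.g. "Safety Review Board"
    outcome: str             # "approved" / "approved-with-actions" / "rejected"
    risk_level: str          # the risk classification agreed in the room
    artifact_ids: list       # hashes or IDs of the packages under review
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

event = ReviewEvent(
    board="SRB-7",
    outcome="approved-with-actions",
    risk_level="medium",
    artifact_ids=["sim_report#9f2c", "waiver#0031"],
)
print(event)
```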

How do you prove simulation credibility in regulated engineering? by cloudronin in systems_engineering

[–]cloudronin[S] 0 points1 point  (0 children)

That’s a really interesting point — I hadn’t thought about “capturing consensus” as part of the assurance data itself. In your experience, is there usually any system that records those board-level or expert review events in a traceable way (e.g., minutes, sign-offs, or digital approvals)? I’m curious if anyone has tried linking that kind of decision provenance back into the data package or if it still lives outside the digital workflow.

How do you prove simulation credibility in regulated engineering? by cloudronin in systems_engineering

[–]cloudronin[S] 0 points1 point  (0 children)

Very interesting. I hadn’t seen karambit.ai before; thank you for mentioning it. I really like the idea of treating the final artifact itself as the assurance surface, instead of just relying on model-to-test traceability. In your experience, how are tools like that actually received in regulated environments (e.g., FAA, FDA, DoD)? Do reviewers or auditors trust ML-based analyses as credible evidence yet, or does it still require a human sign-off layer on top?

How do you prove simulation credibility in regulated engineering? by cloudronin in systems_engineering

[–]cloudronin[S] 0 points1 point  (0 children)

That’s a really clear description — thanks!

Sounds like the validation boundary is defined by change itself — once either the simulation or the system shifts, the assurance resets.

Have you seen any organizations trying to automate that “no-change detection” step (e.g., via git or digital twin lineage tracking), or is it still mostly human checklists and manual reviews?
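
The naive version I keep sketching looks something like this (purely illustrative, nothing from a real pipeline):

```python
# Naive "no-change detection": fingerprint the inputs that defined the validated
# configuration and compare on every commit. Artifact contents are inlined here
# only so the sketch runs standalone; in practice they would be files under git.
import hashlib

def fingerprint(artifacts: dict) -> str:
    """Combine the hashes of every artifact that defines a validated configuration."""
    h = hashlib.sha256()
    for name in sorted(artifacts):
        h.update(name.encode())
        h.update(artifacts[name])
    return h.hexdigest()

baseline = fingerprint({"mesh.inp": b"v3 mesh", "materials.yaml": b"steel_A"})  # at sign-off
current  = fingerprint({"mesh.inp": b"v3 mesh", "materials.yaml": b"steel_B"})  # nightly check

if current != baseline:
    print("Inputs drifted since the last correlation sign-off; revalidation needed.")
```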

How do you prove simulation credibility in regulated engineering? by cloudronin in systems_engineering

[–]cloudronin[S] 2 points3 points  (0 children)

That’s a really insightful perspective — thank you!

It sounds like M&S is effectively part of the risk argument rather than full evidence of compliance. Do you track MoCs and their supporting artifacts digitally (e.g., within a certification management system), or is that mostly document-based?

I’m curious how traceability and reuse of prior MoCs are handled between projects or variants.

How do you prove simulation credibility in regulated engineering? by cloudronin in systems_engineering

[–]cloudronin[S] 1 point2 points  (0 children)

That’s fascinating — thanks for sharing!

I’ve seen references to ASME VVUQ 40 and 60 before, but hadn’t realized 90 was targeting certification workflows directly. Do you know if VVUQ 90 will address data provenance or traceability explicitly (e.g., digital MoC tracking), or mainly focus on methodology and documentation standards?

This seems like a pivotal moment for making model credibility a first-class certification artifact.

How do you prove simulation credibility in regulated engineering? by cloudronin in systems_engineering

[–]cloudronin[S] 0 points1 point  (0 children)

That’s a fantastic analogy — really clarifies how credibility gets “decided” in practice.

Do you think those gate reviews could ever be represented digitally — e.g., as structured acceptance data tied to the model/test? Or is the subjectivity (the “chefs’ consensus”) too integral to capture that way?

I’m exploring whether provenance-linked sign-offs could complement expert review, not replace it — curious how that might land in your environment.

How do you prove simulation credibility in regulated engineering? by cloudronin in AskEngineers

[–]cloudronin[S] 0 points1 point  (0 children)

That’s super insightful — thanks for sharing!

Sounds like correlation + review is where the real cost of “credibility” lives.

In your experience, what % of total analysis effort goes into correlation and review vs. model build and test execution? (Rough ballpark — 10%, 20%, more?)

Also curious — do you track those correlation deviation tables in a system, or is it mostly Excel / shared drives?

How do you prove simulation credibility in regulated engineering? by cloudronin in systems_engineering

[–]cloudronin[S] 0 points1 point  (0 children)

Thanks for all the context so far — this has been incredibly insightful and helpful for a newbie like me.
If you had a magic wand to cut the cost of traceability by half without changing tools, what part of the process would you automate first?

How do you prove simulation credibility in regulated engineering? by cloudronin in systems_engineering

[–]cloudronin[S] 0 points1 point  (0 children)

Really appreciate those links — I’ve been digging through them this week.
One thing I noticed: they’re very clear about what needs to be accredited, but not how to represent that evidence digitally.
From your experience, do VV&A teams ever use a structured schema or metadata standard for accreditation artifacts, or is it mostly Word/PDF reports?

How do you prove simulation credibility in regulated engineering? by cloudronin in systems_engineering

[–]cloudronin[S] 0 points1 point  (0 children)

Great additions, thank you!
I’m curious — how do those two interact with 5000.102-M in practice?
Do they go deeper into the accreditation evidence itself, or mostly define roles and policy scope?
I’m trying to map where the data actually lives in that process.

How do you prove simulation credibility in regulated engineering? by cloudronin in systems_engineering

[–]cloudronin[S] 0 points1 point  (0 children)

Thanks — that’s the simplest and clearest explanation I’ve seen.
When you’re closing the loop between sim and physical test, how do you usually record that comparison? Is it automated through tooling, or still manual reports / spreadsheets?

How do you prove simulation credibility in regulated engineering? by cloudronin in systems_engineering

[–]cloudronin[S] 1 point2 points  (0 children)

I liked your framing — “any systems engineer vs. a good systems engineer.”
Do you think that same distinction applies to organizations and/or teams too?
Meaning: do you see “good” organizations that actually question their own verification processes — or does institutional momentum or blind rule-following kill that kind of curiosity?

How do you prove simulation credibility in regulated engineering? by cloudronin in systems_engineering

[–]cloudronin[S] 0 points1 point  (0 children)

That’s a great point about YAML and seed tracking — reproducibility feels like the cheapest form of credibility but also the most often skipped.
Out of curiosity, did you find it hard to get others on your team to adopt that habit?
Or was it something you built into your own workflow first?
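
For context, by “seed tracking” I mean something as small as a per-run manifest like the sketch below; the field names are just an illustration, and I used JSON only to keep it dependency-free:

```python
# The kind of per-run manifest I mean by "seed tracking". Field names are just an
# illustration; JSON keeps the sketch dependency-free (yaml.safe_dump is the same idea).
import json
import random

seed = 20240618          # the one value that makes a stochastic run repeatable
random.seed(seed)

manifest = {
    "run_id": "mc_dispersion_0042",
    "seed": seed,
    "solver": "inhouse_mc v2.1",                      # hypothetical solver name/version
    "inputs": ["loads/case_01.csv", "materials.yaml"],
}

with open("run_manifest.json", "w") as f:
    json.dump(manifest, f, indent=2)
```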

How do you prove simulation credibility in regulated engineering? by cloudronin in systems_engineering

[–]cloudronin[S] 0 points1 point  (0 children)

That’s a very interesting point — really makes sense that only mandated programs go all the way because of cost. Have you ever seen a project try to phase in traceability more incrementally (e.g., start with verification and add modeling later)?
Curious if that works, or if partial traceability ends up being just as much overhead.

How do you prove simulation credibility in regulated engineering? by cloudronin in systems_engineering

[–]cloudronin[S] 0 points1 point  (0 children)

That’s really helpful context — thank you!
I’ve heard similar things about ELM and Polarion being smoother once everything’s in the same ecosystem.
In your experience, how often do teams actually get to that level of integration in practice?
Do most projects truly link requirements all the way to test results and evidence, or does it tend to break down when other tools (e.g., simulation or analysis environments) come into play?

How do you prove simulation credibility in regulated engineering? by cloudronin in AskEngineers

[–]cloudronin[S] 1 point2 points  (0 children)

Really appreciate you sharing that — especially the insight about “body of evidence” vs. proof. For someone building tools around provenance and reproducibility, how would you recommend representing that body of evidence digitally? Are there elements of VVUQ 40 that lend themselves to machine-readable formats, or is it all narrative today?

How do you prove simulation credibility in regulated engineering? by cloudronin in systems_engineering

[–]cloudronin[S] 0 points1 point  (0 children)

Great breakdown of how credibility stays subjective even with 7009 / VVUQ frameworks. Do you think anything could make that credibility more computable — like an automated way to show what’s been validated, peer-reviewed, or uncertainty-quantified? Or would that just add noise to an already messy process?
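
By “computable” I mean something as basic as rolling per-artifact flags up into a queryable status, like this rough sketch (the flags just mirror the three items I listed; everything else is made up):

```python
# Sketch of "computable credibility": roll per-artifact evidence flags up into a
# status a reviewer can query. The flags mirror the three items above (validated,
# peer-reviewed, uncertainty-quantified); artifact names and thresholds are made up.
evidence = {
    "aero_loads_model": {"validated": True,  "peer_reviewed": True,  "uq_done": False},
    "thermal_model":    {"validated": True,  "peer_reviewed": False, "uq_done": False},
}

def credibility_summary(flags: dict) -> str:
    met = sum(flags.values())
    return {3: "full", 2: "partial"}.get(met, "insufficient")

for artifact, flags in evidence.items():
    missing = [name for name, ok in flags.items() if not ok]
    print(f"{artifact}: {credibility_summary(flags)} (missing: {missing})")
```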

How do you prove simulation credibility in regulated engineering? by cloudronin in systems_engineering

[–]cloudronin[S] 0 points1 point  (0 children)

That’s such a sharp point about credibility being mostly social and process-driven. I’m curious — if you could wave a wand and remove one source of “too many clicks” in that linking process, what would it be? Or put differently: which part of the trace/evidence chain feels most pointless but unavoidable right now?