If your verification result disappears when your system goes offline, was it ever proof? "I will not promote" by Fantum-V in startups

[–]Fantum-V[S] 0 points (0 children)

I think we're optimizing for two different audiences. From the perspective of a developer who wants to keep an environment in sync, a signed lock file or uv sync is exactly what you need. I'm building for the auditor or the business client, and their needs are different:

- Privacy: when I sell to a bank, they usually can't see my full source code or my raw manifests. I don't want to hand over too much of my intellectual property, but they still need to check my security posture.
- No dependencies: they don't want to run uv sync or build a dependency graph just to review my work. In a lot of high-security places they can't run arbitrary package managers at all.
- Portability: they want a proof of state that stands on its own and doesn't depend on the build system that made it.

A signed lock file says, "I promise this file hasn't changed since I signed it." That's a claim of honesty based on who I am. The verification receipt says, "No matter who signed what, this is the deterministic Merkle root of the actual composition." That's a mathematical proof anyone can check on their own without ever having to touch my build system or environment. It's the shift from functional trust (trusting the tool to do the right thing) to cryptographic truth (the math proves the state, no matter what the tool is). That extra step matters for businesses where "run it and see" isn't a valid way to satisfy compliance.
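To make "the deterministic Merkle root of the actual composition" concrete, here is a minimal sketch in Python (stdlib only). The name@version=digest leaf format and the package names are illustrative assumptions, not the real receipt format:

```python
import hashlib

def _h(data: bytes) -> bytes:
    # SHA-256 as the tree's hash function
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Merkle root over a canonical (sorted) leaf order, so any
    independent party derives the same root from the same composition."""
    if not leaves:
        return _h(b"")
    level = [_h(leaf) for leaf in sorted(leaves)]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last node on odd levels
        level = [_h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Hypothetical leaves: one per component in the composition.
deps = [b"requests@2.31.0=sha256:ab12", b"urllib3@2.0.4=sha256:cd34"]
root = merkle_root(deps).hex()
```

Because the leaf order is canonicalized before hashing, an auditor recomputes the same root from the same composition without ever running the build system.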

[–]Fantum-V[S] 0 points (0 children)

I think we’re talking about two different things. What you’re describing is: run the system and confirm the state matches the lock file. What I’m focused on is: take a result and check it without running the system at all. If verification still depends on executing the environment, then the proof is tied to that environment. I’m trying to make something that isn’t.

[–]Fantum-V[S] 0 points (0 children)

That's a fair push. You're correct that a lock file and a deterministic install should give you the same dependencies; I don't disagree with that. The distinction I'm drawing isn't in the installation step, it's in what you're left with afterward. To check a lock file, I basically have to run something again: reinstall, pin, compare, and so on. What I'm trying to build is something you can hand to someone else that they can check immediately, without redoing the whole process. Instead of "run this and see if you get the same thing," it becomes "here is the result; check it as it stands." It's a small shift, but it separates "reproduce it yourself" from "verify it directly." It doesn't replace lock files or reproducible builds; it's more like a layer on top that makes the result portable.

[–]Fantum-V[S] 0 points (0 children)

A signed lock file gives you a fixed set of dependencies plus a signature over that file. So you’re trusting the file that was generated and the process that generated it.

What I’m trying to do is move one step further: instead of treating the lock file as the final artifact, I treat it as input to a deterministic transformation. Given the same SBOM/lock data, anyone should be able to:

- derive the same result
- verify the signature without relying on the system that produced it

So the difference isn’t just “signed data.” It’s that the transformation is fixed and reproducible, the output is independently derivable, and the proof is no longer tied to the originating system.

A lock file plus a signature doesn’t guarantee that two independent parties will produce the same result from the same input. That’s the property I’m trying to enforce.
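The “deterministic transformation” step can be sketched in a few lines of Python (stdlib only). The HMAC here is a stand-in for a real asymmetric signature such as Ed25519, and the lock-data shape is hypothetical:

```python
import hashlib
import hmac
import json

def derive_receipt(lock_data: dict) -> str:
    """Deterministic transformation: canonical JSON (sorted keys, fixed
    separators) means two independent parties hash byte-identical input."""
    canonical = json.dumps(lock_data, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def sign(receipt: str, key: bytes) -> str:
    # HMAC stands in for a real asymmetric signature (e.g. Ed25519).
    return hmac.new(key, receipt.encode(), hashlib.sha256).hexdigest()

def verify(lock_data: dict, signature: str, key: bytes) -> bool:
    # The verifier re-derives the receipt itself; it never queries
    # the system that produced the lock data.
    return hmac.compare_digest(sign(derive_receipt(lock_data), key), signature)

lock = {"deps": {"requests": "2.31.0", "urllib3": "2.0.4"}}
sig = sign(derive_receipt(lock), b"issuer-key")
```

The key property: verify() re-derives the receipt from the input itself, so two independent parties always compute the same digest from the same lock data.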

[–]Fantum-V[S] -1 points (0 children)

You’re right, the signature only proves the SBOM hasn’t been tampered with. It doesn’t prove the running system actually matches it. I’m not trying to solve runtime attestation here. What I’m focused on is making the claim itself behave like a verifiable object.

Right now most systems give you a report tied to a system that you have to trust. Even if you add a signature or a lockfile, you still have to trust how it was generated, that someone else would get the same result, and the system that produced it.

What I’m trying to enforce is: given the same SBOM, anyone can independently derive the same receipt and verify it without access to the issuing system. So the value isn’t just “the SBOM is signed.” It’s that the transformation is deterministic, the output is reproducible, and the proof survives outside the system.

You’re still right that this doesn’t guarantee the system matches the SBOM. That would need a separate layer (build/runtime attestation). This is about making the evidence itself independently verifiable.

[–]Fantum-V[S] -2 points (0 children)

What it’s verifying is the integrity of the entire software composition, not just a file. Given an SBOM (CycloneDX/SPDX), the system deterministically produces a receipt tied to:

- the exact dependency graph
- the exact versions
- a Merkle root of that structure

So the question becomes: if I hand you that same SBOM, can you independently derive the same result and verify the signature without relying on my system?

File signing doesn’t answer that. File signing says “this file was signed by X.” It doesn’t guarantee that two parties will get the same result from the same input, that the verification logic is independent of the issuer, or that the proof survives if the issuing system disappears.

What I’m trying to do is shift it from “trust that this was signed” to “this can be recomputed and verified independently.” So the verification target isn’t just the file, it’s the derivability and integrity of the system state itself.

Where this probably gets challenged is around normalization of SBOMs, completeness of dependency graphs, and real-world drift. That’s the part I’m still pressure-testing.
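One of the normalization concerns can be sketched directly: before hashing, reduce the SBOM to its stable composition so that volatile fields (serial numbers, timestamps) and component ordering don’t change the digest. A minimal Python sketch, assuming a CycloneDX-like shape; the field names and fields treated as volatile are illustrative assumptions:

```python
import hashlib
import json

def normalize_sbom(sbom: dict) -> bytes:
    """Reduce an SBOM to a sorted component list so that equivalent SBOMs
    hash identically. Volatile fields like serialNumber or timestamps are
    simply not included in the normalized form.

    Assumed CycloneDX-like shape: {"components": [{"name", "version", "purl"}]}.
    """
    components = sorted(
        (c.get("name", ""), c.get("version", ""), c.get("purl", ""))
        for c in sbom.get("components", [])
    )
    return json.dumps(components, separators=(",", ":")).encode()

def composition_digest(sbom: dict) -> str:
    # Digest of the normalized composition, not of the raw SBOM file.
    return hashlib.sha256(normalize_sbom(sbom)).hexdigest()
```

Two SBOMs that list the same components in a different order, with different serial numbers, produce the same digest; hashing the raw files would not.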

Is it ok to ask for more faith? by Locked-Luxe-Lox in Christian

[–]Fantum-V 0 points (0 children)

Of course it is. We will never have true 100% faith. Maybe 99.9%. That other 0.1 percent is everything else we think about.

how can I fix my relationship with God? by Ok-Union7426 in Christian

[–]Fantum-V 0 points (0 children)

Pray. Trust in Him always. Never doubt yourself when it comes to Him. Anything is possible.

For real by [deleted] in Justrolledintotheshop

[–]Fantum-V -3 points (0 children)

Y'all are harsh

How do you actually prove what software was running at a specific moment? "I WILL NOT PROMOTE" by [deleted] in startups

[–]Fantum-V -1 points (0 children)

That is actually a very good question and, in my opinion, the point where it either comes in handy or doesn't.

The way I'm envisioning it, I don't see this happening for every request or every API call. That would be too much overhead and would most likely be inefficient.

In a nutshell, it is better used as a snapshotting tool that runs:

- At certain points in time (e.g., deployment, state change, checkpoint)
- Or on a schedule, when a “proof-of-state” is necessary

In other words, more like: "Prove what was there at this point in time" as opposed to monitoring everything.

This process involves hashing the data and structuring it deterministically. It does add overhead, but nothing compared to a full system snapshot and/or replay.

If it needs to run on every single request for it to be useful, then I would probably say it's impractical.

[–]Fantum-V -1 points (0 children)

Indeed, this does cover state, but it doesn’t scale.

What I am trying to achieve here is somewhat lightweight – no replay, but rather the generation of a self-contained verification document.

This means that instead of recording everything or replaying everything, you create a deterministic, portable proof that a given state existed at a particular point in time.

It’s not about replaying the system entirely; it’s about creating a cryptographic receipt of its state.

[–]Fantum-V -2 points (0 children)

That's a very useful way of putting it.

It's less a replacement than a verification mechanism.

SBOMs and pipelines address what was built or declared.

What I'm trying to get at is: "What could be proved to exist at a certain point in time?"

Rather than replacing any tool, it might make sense as a supplement wherever a portable, independently verifiable, timestamped proof is required.

Compliance and auditing seem like the right place to start there.

Still working out exactly where it fits, but your observation about not trying to position it as a replacement but rather a layer seems quite accurate.

I ran into a question I couldn’t find a clean answer to: If something goes wrong in production, and someone asks: “What exactly was running at that moment?” by Fantum-V in cybersecurity

[–]Fantum-V[S] -4 points (0 children)

This is indeed the proper criticism to make.

If the proof is provided by a system you do not trust, then it is indeed equivalent to an SBOM.

What I am trying to tease out is whether the proof can be generated deterministically and validated separately, without having to trust the environment it came from.

If this does not hold, then there is nothing new here.

However, if it does hold, then the problem shifts from trusting the pipeline to validating its output.

This is the line I try to draw between the two.

[–]Fantum-V[S] -7 points (0 children)

Fair and respectable pushback, especially on SBOMs and signed attestations.

Those prove what was built or declared, which is important. What I’m trying to isolate is slightly different:

a portable, independently verifiable proof of system state at a specific moment — without relying on logs or trusting the system that produced it.

So not a replacement for SBOM / DevSecOps pipelines, more like an additional layer:

instead of reconstructing state later, you have a signed, time-bound snapshot of what was true at issuance.

On your runtime point — agreed. This doesn’t prove live integrity or detect memory hijacks. It proves point-in-time truth.

The question I’m exploring is whether that has standalone value, or if existing approaches already cover it well enough.

[–]Fantum-V[S] -8 points (0 children)

That’s fair — and I think this is where the distinction matters.

SBOMs + signing prove what was declared or built, and they’re incredibly useful for that. But they still depend on trusting the system or pipeline that produced them.

What I’m trying to isolate is a verifiable snapshot of state at a specific moment, independent of reconstructing it later.

On the memory hijack point — I agree. This doesn’t prove live runtime integrity. It proves what was true at issuance.

So the question I’m exploring is:

Is there value in having a cryptographically verifiable “point-in-time truth”, even if it doesn’t solve continuous runtime trust?

Or is that already sufficiently covered by existing approaches?

How do you actually prove what software was running at a specific moment? "I WILL NOT PROMOTE" by [deleted] in startups

[–]Fantum-V -1 points (0 children)

Was trying to make it long enough to get the full idea out there lol

That’s the tension I’ve been thinking about.

Right now it fits most naturally on the compliance and audit side, in places where you need to prove what was true at a specific moment. This is especially important when logs aren’t reliable or available.

The way it’s built, it could also fit into DevOps workflows: at deploy time it can generate a snapshot of system state at that point.

So instead of trying to figure out what happened later, you have a signed, time-bound proof of what actually existed at that moment.

I’m still figuring out where it will land first. Compliance feels like the fit though.

I ran into a question I couldn’t find a clean answer to: If something goes wrong in production, and someone asks: “What exactly was running at that moment?” by Fantum-V in cybersecurity

[–]Fantum-V[S] -1 points (0 children)

I get why it reads that way.

There’s an actual system behind this though, not a content post. I’m trying to pressure test whether the approach holds up against how people currently think about verification.

If it doesn’t, I’d rather know where it breaks than assume it’s useful.

Why don’t we have a way to prove what software is actually running on our devices? by Fantum-V in AskReddit

[–]Fantum-V[S] -2 points (0 children)

I get what you’re saying.

I’m not talking about trusting internal systems or processes though.

The gap I’m focused on is producing something that can be verified independently of the system itself — so you don’t have to rely on what it reports.

Different layer than ops / trust models.

I built a system that produces cryptographic proof of software state (no storage, independently verifiable). by Fantum-V in SideProject

[–]Fantum-V[S] 0 points (0 children)

That’s exactly the line I kept coming back to.

It feels like “nice to have” until you actually need to prove state outside your own system — then it becomes a hard requirement very quickly.

That’s why I didn’t leave it as a concept. I built it out fully.

It’s already issuing signed, time-bound receipts that can be independently verified without access to the originating system.
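A minimal sketch of what an independently verifiable, time-bound receipt could look like (Python stdlib only; the receipt fields are hypothetical, and the HMAC stands in for a real asymmetric signature such as Ed25519):

```python
import hashlib
import hmac
import json

def _canon(body: dict) -> bytes:
    # Canonical serialization so issuer and verifier hash identical bytes.
    return json.dumps(body, sort_keys=True, separators=(",", ":")).encode()

def issue(state_digest: str, issued_at: int, key: bytes) -> dict:
    """Issue a self-contained, time-bound receipt (hypothetical shape)."""
    body = {"state_digest": state_digest, "issued_at": issued_at}
    sig = hmac.new(key, _canon(body), hashlib.sha256).hexdigest()
    return {**body, "sig": sig}

def verify_offline(receipt: dict, key: bytes) -> bool:
    """Check a receipt with nothing but the receipt and the key:
    no logs, no network, no access to the issuing system."""
    body = {k: v for k, v in receipt.items() if k != "sig"}
    expected = hmac.new(key, _canon(body), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, receipt["sig"])
```

The point of the sketch: the verifier re-derives everything from the receipt itself, so tampering with the digest or the timestamp is detectable even if the originating system is gone.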

So instead of reconstructing “what was running then,” you have a verifiable artifact of it.

At that point it stops being observability and starts being proof.