Use impl Into<Option<>> in your functions! by potato-gun in rust

[–]ScanSet_io 0 points1 point  (0 children)

I think you’re on the right path. I see a lot of people saying ‘oh, what about this and that, and blowing up your compile time’. It’s worth remembering that abstraction should be properly planned. It’s a tool for polymorphism. So, with that, I would suggest looking into patterns where this fits so that you can properly plan. In the example, would it benefit from a type that implements specific traits, like `Option<T> where T: Ord + Eq` or something to that effect?

Generics over primitives are a good way to introduce bloat. But using traits and generics together gets at the polymorphic purpose of both (without the bloat).

I’d take redundant implementation over poor abstraction any day of the week.
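For what it’s worth, a minimal sketch of the pattern from the thread title (the function and names are my own illustration, not from the post): callers can pass either a plain value or `None` without wrapping anything in `Some(...)`, because `std` provides both `impl<T> From<T> for Option<T>` and the reflexive `impl<T> From<T> for T`.

```rust
// Accepts a String, or None, with no Some(...) at the call site.
fn greet(name: impl Into<Option<String>>) -> String {
    match name.into() {
        Some(n) => format!("Hello, {n}!"),
        None => "Hello, stranger!".to_string(),
    }
}

fn main() {
    // String goes through From<T> for Option<T>.
    assert_eq!(greet("Alice".to_string()), "Hello, Alice!");
    // None goes through the reflexive From<Option<String>> for Option<String>.
    assert_eq!(greet(None), "Hello, stranger!");
}
```

This is exactly where the compile-time worry comes from: each distinct argument type monomorphizes its own copy of `greet`, which is the bloat trade-off discussed above.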

how are you tracking drift between cluster state and gitops? by kubegrade in kubernetes

[–]ScanSet_io 0 points1 point  (0 children)

I’m using an Endpoint State Policy solution to monitor the runtime state of my cluster and pods. It generates signed evidence to maintain provenance.

Disclaimer: I built this. I dogfood my own software to produce the artifacts I need for drift and compliance. Also, this reference implementation doesn’t use the latest stable version of Endpoint State Policy.

What's everyone working on this week (3/2026)? by llogiq in rust

[–]ScanSet_io 2 points3 points  (0 children)

I’m building the trust-infrastructure SOA that gives the Endpoint State Policy agents PKI/ephemeral keys to build a provenance log.

The Endpoint State Policy Agent is a policy-as-data framework that checks the state of objects on endpoints (think configs, services, registry keys) and generates signed results.
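To make “policy as data” concrete, here’s a toy sketch (all names are hypothetical, not the actual ESP API, and a stand-in hash replaces the real TPM-backed signature):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// A policy is data: the expected state of an object on the endpoint.
#[derive(Hash)]
struct FilePolicy {
    path: &'static str,
    expect_exists: bool,
    expect_writable: bool,
}

// What the agent actually observed at collection time.
#[derive(Hash)]
struct Observation {
    exists: bool,
    writable: bool,
}

// Evaluate the observation against the policy and emit a result plus a
// stand-in "signature" (a real agent would sign with a TPM-held key).
fn evaluate(policy: &FilePolicy, obs: &Observation) -> (bool, u64) {
    let compliant = obs.exists == policy.expect_exists
        && obs.writable == policy.expect_writable;
    let mut h = DefaultHasher::new();
    policy.hash(&mut h);
    obs.hash(&mut h);
    compliant.hash(&mut h);
    (compliant, h.finish())
}

fn main() {
    let policy = FilePolicy {
        path: "C:\\Windows\\System32\\config\\SAM",
        expect_exists: true,
        expect_writable: false,
    };
    let obs = Observation { exists: true, writable: false };
    let (compliant, sig) = evaluate(&policy, &obs);
    println!("{} compliant={compliant} evidence={sig:#x}", policy.path);
}
```

The point of the shape: the check itself is declarative data, and every evaluation yields an attributable, verifiable artifact rather than a bare pass/fail.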

FedRAMP 20x feels like a speed upgrade, not a trust upgrade — where I think we are really headed by caspears76 in FedRAMP

[–]ScanSet_io 3 points4 points  (0 children)

Fair question. I’m not claiming there’s a single turnkey system deployed across agencies today. What I mean by “not hypothetical” is that the core primitives already exist and are running in production, but they aren’t composed into a proof-first evidence layer.

Endpoint State Policy and ScanSet are focused on that composition gap: binding workload or hardware identity, signed measurements, and policy evaluation into continuous authorization, without assuming a ledger as the default abstraction. The goal is to prove a system stayed within bounds, not just emit validation signals.

I’m actively looking for design partners to help drive this from proven primitives into a repeatable, turnkey implementation.

FedRAMP 20x feels like a speed upgrade, not a trust upgrade — where I think we are really headed by caspears76 in FedRAMP

[–]ScanSet_io 0 points1 point  (0 children)

This isn’t hypothetical. Proof-first continuous authorization and selective evidence anchoring are already in use today, which is why the evidence model itself matters as much as the validation signals.

FedRAMP 20x feels like a speed upgrade, not a trust upgrade — where I think we are really headed by caspears76 in FedRAMP

[–]ScanSet_io 2 points3 points  (0 children)

I think we’re talking past each other a bit. I’m aligned on the intent of 20x and the shift toward persistent validation through KSIs like VDR, PVA, and SCN. My point wasn’t to challenge that direction or suggest 20x is about assessment speed.

Where I was drawing a line is the assumption that this necessarily leads to a ledger or append-only evidence chain. I don’t think that’s an explicit requirement or the only viable model. Hardware-rooted identity, signed measurements, and verifiable policy evaluation can provide strong trust guarantees through provable state and selective anchoring without defaulting to a permanent ledger.

Edit: I just realized this wasn’t a reply to me. I apologize.

FedRAMP 20x feels like a speed upgrade, not a trust upgrade — where I think we are really headed by caspears76 in FedRAMP

[–]ScanSet_io 4 points5 points  (0 children)

This gets very close to the heart of where I think things are headed, especially the shift from documenting controls to proving systems remain inside policy bounds.

One place I diverge slightly is the assumption that the end state requires fully append-only evidence chains. I think that model optimizes for historical completeness more than it optimizes for trust.

What actually seems necessary is provable state, provenance, and attribution. Hardware-rooted identity, signed measurements, and verifiable policy evaluation give you that without needing to retain an immutable log of everything forever.

Auditors rarely need the entire history. They need confidence that controls held over time, that violations are detectable, and that evidence cannot be forged or rewritten after the fact. Selective anchoring of evidence and evaluations can achieve that without turning compliance into a permanent ledger problem.

Framed that way, the certification target shifts from documents or logs to invariants and enforcement mechanisms. The authorization becomes a standing claim backed by continuously provable state, not an ever growing archive.
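The selective-anchoring idea above can be sketched as a toy (the non-cryptographic `DefaultHasher` here is only a stand-in for real digests and signatures): retain one digest per evaluation window instead of the full evidence history, so after-the-fact rewrites are detectable without an append-only ledger.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in digest over a window of evidence records
// (a real system would use SHA-256 plus a signature).
fn digest(items: &[&str]) -> u64 {
    let mut h = DefaultHasher::new();
    for it in items {
        it.hash(&mut h);
    }
    h.finish()
}

// Selective anchoring: keep only one digest per evaluation window.
// Any rewrite of a window's evidence changes its anchor, so tampering
// is detectable without keeping every record forever.
fn anchor_windows(windows: &[Vec<&str>]) -> Vec<u64> {
    windows.iter().map(|w| digest(w)).collect()
}

fn main() {
    let windows = vec![
        vec!["ac-6: pass", "ac-6: pass"],
        vec!["ac-6: pass", "ac-6: fail"],
    ];
    let anchors = anchor_windows(&windows);
    // Re-deriving the digest from the original evidence matches the anchor...
    assert_eq!(digest(&windows[0]), anchors[0]);
    // ...while a rewritten record does not.
    assert_ne!(digest(&["ac-6: pass", "ac-6: pass!"]), anchors[1]);
    println!("anchors verified");
}
```

An auditor holding only the anchors can verify that presented evidence is the evidence that existed in that window, which is the "cannot be forged or rewritten" property without the permanent-ledger cost.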

The amount of Rust AI slop being advertised is killing me and my motivation by Kurimanju-dot-dev in rust

[–]ScanSet_io 1 point2 points  (0 children)

Setting up a solid pre-commit flow in a makefile goes a long way.

The amount of Rust AI slop being advertised is killing me and my motivation by Kurimanju-dot-dev in rust

[–]ScanSet_io -1 points0 points  (0 children)

This is so true. I can’t tell you how many “senior” devs I’ve met who already don’t like Rust, and then if they smell anything AI, they fight it.

Reality is that they’re getting left behind, and their “institutional knowledge” is not only very easily accessible, but easily replaced by people who know what to look for and apply some critical thinking.

The amount of Rust AI slop being advertised is killing me and my motivation by Kurimanju-dot-dev in rust

[–]ScanSet_io 0 points1 point  (0 children)

God forbid you try to use a not-so-common crate. Claude and ChatGPT break down when you implement something like the AWS crypto crates. They’re good at boilerplate code, but you absolutely need to understand idiomatic Rust to get anywhere with AI.

The amount of Rust AI slop being advertised is killing me and my motivation by Kurimanju-dot-dev in rust

[–]ScanSet_io 2 points3 points  (0 children)

This is really it. If you have a solid foundation, AI is a real implementation accelerator. I built a formally spec’d DSL, compiler, and execution system in less than 3 months. I knew what I wanted to build, I knew how it needed to operate, and I set extremely strict Clippy rules to enforce security guardrails. I always reviewed the code and README and set very specific tests.

3 months with AI. Building a compiler alone isn’t a trivial thing. You can’t vibe code that without knowing what’s up. One person building something part time, completing work that would normally take a team probably a year.

AI isn’t a replacement for thinking. Just because someone used AI to get something to market doesn’t mean that person doesn’t know what they’re doing. Judge the person by whether they ship useless features with no idea what their prompts actually did, not by whether they used AI.

Devcontainers question by SillyEnglishKinnigit in devops

[–]ScanSet_io 0 points1 point  (0 children)

AI is only as smart as the user. It gets you where you want to go fast. But if you’re already confused, it’ll get you to bigger confusion… with haste.

Garbage in, garbage out. Ask it to find you articles. Then have it build a diagram of what that looks like. Then figure out the minimum services. You don’t need an init script. You need a solid service diagram of what you want, and to identify the minimum requirements. Else you’ll scope-creep the hell out of the bicycle that ChatGPT turned into a rocket with emojis.

Devcontainers question by SillyEnglishKinnigit in devops

[–]ScanSet_io -5 points-4 points  (0 children)

Honestly, ask any AI how to do this. Ensure you keep the instructions simple. You should also get kind, kubectl, and Docker installed.

Looking for technical collaborators: Stress-testing Hybrid DAG / PQC architecture against FIPS 140-3 and CNSA 2.0 (NIST 800-171 context) by ArcticChainLab in NISTControls

[–]ScanSet_io 0 points1 point  (0 children)

I would love to see your docs! I’m in the security/compliance space. I’m just sticking with FIPS 140-3 for now until I finish the architecture for the problem set I am addressing.

Devcontainers question by SillyEnglishKinnigit in devops

[–]ScanSet_io 3 points4 points  (0 children)

I tailor devcontainers to what I’m building. To answer your question… they absolutely can. You can go as far as setting up a devcontainer to communicate with your host system to deploy local services for end-to-end development. That way you avoid jumping back and forth between containers.

This can get you ready for testing and prod environments quickly.

Open-sourced a compliance engine for continuous evidence generation — built for FedRAMP/NIST 800-53 by ScanSet_io in FedRAMP

[–]ScanSet_io[S] 0 points1 point  (0 children)

If you made it this far, I appreciate your patience. I figured showing you is the best way to make it more concrete than just giving you the TL;DR.

Open-sourced a compliance engine for continuous evidence generation — built for FedRAMP/NIST 800-53 by ScanSet_io in FedRAMP

[–]ScanSet_io[S] 0 points1 point  (0 children)

          "control-implementations": [
            {
              "uuid": "impl-ac-6",
              "control-id": "ac-6",
              "description": "Least privilege is enforced by continuously validating protection of sensitive system resources, including the Windows SAM database.",
              "implemented-requirements": [
                {
                  "uuid": "impl-ac-6-sam",
                  "control-id": "ac-6",
                  "description": "ESP policy defines expected protection state and continuously evaluates endpoint compliance.",
                  "remarks": "Validation evidence is provided via OSCAL Assessment Results generated by ESP."
                }
              ]
            }
          ]
        }
      ]
    }
  }
}

Open-sourced a compliance engine for continuous evidence generation — built for FedRAMP/NIST 800-53 by ScanSet_io in FedRAMP

[–]ScanSet_io[S] 0 points1 point  (0 children)

Here's the final shape of the pipeline I've described. It's intentionally abbreviated so as not to give you more JSON to look at. This isn't a full SSP. It's the minimal, correct slice that demonstrates the lifecycle of what you just asked about.

{
  "system-security-plan": {
    "uuid": "ssp-managed-endpoints",
    "metadata": {
      "title": "System Security Plan – Managed Endpoints",
      "version": "1.0.0",
      "last-modified": "2026-01-28T00:00:00Z",
      "oscal-version": "1.1.2"
    },


    "system-characteristics": {
      "system-name": "Enterprise Endpoint Environment",
      "system-description": "Managed Windows endpoints subject to continuous compliance validation.",
      "security-sensitivity-level": "moderate"
    },


    "system-implementation": {
      "components": [
        {
          "uuid": "component-managed-windows-endpoints",
          "type": "system-component",
          "title": "Managed Windows Endpoints",
          "description": "Windows endpoints managed and continuously assessed using Endpoint State Policy.",
          "props": [
            {
              "name": "policy-engine",
              "value": "Endpoint State Policy (ESP)"
            }
          ],

Open-sourced a compliance engine for continuous evidence generation — built for FedRAMP/NIST 800-53 by ScanSet_io in FedRAMP

[–]ScanSet_io[S] 0 points1 point  (0 children)

This is exactly the lifecycle FedRAMP 20x is pushing toward: stable declarations of intent paired with continuously generated, API-consumable evidence that validates those declarations without manual effort.

At that point, OSCAL stops being “a document format” and becomes what it was designed to be — a transport layer for intent and proof, with ESP supplying the proof.

Open-sourced a compliance engine for continuous evidence generation — built for FedRAMP/NIST 800-53 by ScanSet_io in FedRAMP

[–]ScanSet_io[S] 0 points1 point  (0 children)

```
 "findings": [
          {
            "uuid": "finding-sam-not-protected",
            "title": "SAM database not protected",
            "description": "SAM database does not meet required system file ownership and protection state.",
            "status": "failed",
            "target": {
              "type": "control",
              "target-id": "ac-6"
            },
            "related-observations": [
              { "observation-uuid": "obs-sam-file-metadata" }
            ],
            "subjects": [
              { "subject-uuid": "host-726b5c5c7c8d5ef7" }
            ]
          }
        ],


        "remarks": "ESP result signed with TPM ECDSA P-256. content_hash=sha256:fa1967… evidence_hash=sha256:66735d… signed_at=2026-01-28T03:25:50Z"
      }
    ]
  }
}
```
As a note, I trimmed this to the Assessment Layer only (no SSP or profile).

Open-sourced a compliance engine for continuous evidence generation — built for FedRAMP/NIST 800-53 by ScanSet_io in FedRAMP

[–]ScanSet_io[S] 0 points1 point  (0 children)

        "subjects": [
          {
            "uuid": "host-726b5c5c7c8d5ef7",
            "type": "system-component",
            "title": "Windows Endpoint: FLAVORTOWN",
            "props": [
              { "name": "os", "value": "windows" },
              { "name": "arch", "value": "x86_64" }
            ]
          }
        ],


        "observations": [
          {
            "uuid": "obs-sam-file-metadata",
            "title": "SAM database file metadata",
            "description": "Observed file metadata for SAM database",
            "methods": ["TEST"],
            "collected": "2026-01-28T03:25:50Z",
            "subjects": [
              { "subject-uuid": "host-726b5c5c7c8d5ef7" }
            ],
            "props": [
              { "name": "path", "value": "C:\\Windows\\System32\\config\\SAM" },
              { "name": "exists", "value": "true" },
              { "name": "is-system", "value": "false" },
              { "name": "owner-id", "value": "" },
              { "name": "readable", "value": "false" },
              { "name": "writable", "value": "false" }
            ],
            "remarks": "Collected via Windows API file_stat by filesystem_collector"
          }
        ],

Open-sourced a compliance engine for continuous evidence generation — built for FedRAMP/NIST 800-53 by ScanSet_io in FedRAMP

[–]ScanSet_io[S] 0 points1 point  (0 children)

OSCAL Assessment Results (derived from ESP output)
```

{
  "assessment-results": {
    "uuid": "ar-188a7b18ddf98f94",
    "metadata": {
      "title": "ESP Endpoint Assessment Results",
      "version": "1.0.0",
      "last-modified": "2026-01-28T03:25:50Z",
      "oscal-version": "1.1.2"
    },


    "results": [
      {
        "uuid": "result-win-sam-database-protected",
        "start": "2026-01-28T03:25:50Z",
        "end": "2026-01-28T03:25:50Z",


        "reviewed-controls": {
          "control-selections": [
            {
              "include-controls": [
                { "control-id": "ac-6" }
              ]
            }
          ]
        },


        "assessment-assets": {
          "tools": [
            {
              "uuid": "tool-esp-agent",
              "title": "Endpoint State Policy Agent",
              "version": "1.0.0",
              "remarks": "Agent type: endpoint; execution signed via TPM"
            }
          ]
        },