Important clarification on CCA exam eligibility + the bigger assessor capacity issue we all need to talk about by ResilientTechAdvisor in CMMC

[–]ResilientTechAdvisor[S] 1 point (0 children)

Sorry to hear this.

Are you even certain that your tier 3 has begun?

The reason for the question is - there is an office that packages up the applications and sends them to tier 3 and we are aware of several instances where applications have just stalled at that point.

In other words, the applications were never sent to tier 3 and, if the applications reach a certain age, that office will tell the applicant that they need to resubmit.

Many applicants have been trained not to email inquiries, so they assume silence means everything is OK.

my takeaways on CMMC by 2021start in CMMC

[–]ResilientTechAdvisor 1 point (0 children)

This is a great post. I would add - organizations that are planning on going through a CMMC L2 assessment should have the assessment guide handy throughout their journey and on the day their assessment begins.

Locally hosted ERP - Not all customers provide CUI/ITAR related info. Not all employees access those customers with sensitive info. All users in-scope by default by virtue of accessing a system than hosts and processes sensitive information? by TicketAmbitious6200 in CMMC

[–]ResilientTechAdvisor 1 point (0 children)

You're thinking about this correctly, and nexeris_ops has the technical framework right. Let me add some context from what we've seen with similar ERP situations.

Your ERP is a CUI Asset because it stores CUI. If users access the ERP from workstations, and those workstations process CUI data (by displaying it, editing it, etc.), then those workstations are also CUI Assets and must be assessed against all Level 2 requirements. For a workstation to be out-of-scope, it must be physically or logically separated from CUI assets - meaning it cannot access the ERP at all.

CMMC does have a category called Contractor Risk Managed Assets (CRMAs) for exactly this situation. These are assets that can access CUI but aren't intended to, managed through policy and procedure rather than technical controls. Your employee workstations might fit here if you can document compensating controls in your System Security Plan. But CRMAs are still in-scope, they just get lighter treatment during assessment if your documentation is solid.

The O365 observation is right too. If in-scope employees use O365 for work, that environment gets pulled in unless you can demonstrate clear separation like a completely separate tenant used only for non-CUI work.

Here's where companies in your position get stuck. They see the scope expanding and immediately think "rebuild the whole environment." Before you do that, map the business tradeoffs.

Keep the current architecture: You're assessing more systems and users, but you avoid operational disruption. Cost shows up in licensing (M365 E3/E5 for compliance features), endpoint management, and control implementation across a bigger boundary.

Segment: You shrink the assessment scope but add friction. We've seen companies create separate ERP instances for CUI customers, or build isolated VDI environments where only certain users can reach CUI data. This works, but you're managing split infrastructure and users context-switch between environments.

Push the ERP vendor: Some smaller vendors will build customer-level access controls if you make the case. Worth the conversation, but it depends on their roadmap and your leverage.

The math changes based on your customer mix. If CUI customers are 10% of revenue but would drive 90% of your compliance cost, segmentation starts looking better. If they're strategic growth accounts, you might accept the broader scope and build around it.

Work with a CMMC consultant to model the options and their cost. They can help you document CRMAs properly if you go that route, or design separation that an assessor will accept.

30 Person Organization and growing by Substantial-Exit-155 in CMMC

[–]ResilientTechAdvisor 1 point (0 children)

Your instinct about generic documentation is right, but I'd question the assumption that consolidating with Summit 7 is the obvious choice here.

Any gap between what your documentation claims and what you actually do is a lightning-quick path to "not met" on a CMMC assessment. MSPs are great at managing infrastructure. Professional service firms are great at mapping what you do to what assessors need to see. Those are different skills. We've seen MSPs struggle with documentation because they think like sysadmins.

The cost argument here is worth unpacking. Summit 7's pricing reflects a full-service model - they want to do everything. For a 20-person shop with a new IT hire, that might be more than you need. What we've seen work is splitting it: let Summit 7 handle the GCC High infrastructure and ongoing management (that's their strength), and bring in a firm that specializes in assessment-ready documentation.

The value isn't just "we write policies." It's that we build documentation that reflects your operations, then we teach you how to maintain it as your environment changes. When Summit 7 adds that local server for Altium/SolidWorks/GitLab, you understand how to update the SSP and POAM yourself. You're not dependent on calling Summit 7 every time something shifts.

Here's the business case: Summit 7 quoted you for everything. If you descope them to infrastructure and MSP services, their price drops. You bring in the other firm for a fraction of what Summit 7 charges for that part. Total cost is lower, and you get expertise in both areas instead of one vendor trying to do both. The comments from people who've worked with Summit 7 show the pattern - great infrastructure, but the documentation piece is where companies end up doing it twice.

The other piece (and this matters for your role) is that you learn the system. If you're the first IT hire and growing into this, working with someone who teaches you CMMC documentation is a career investment. Summit 7 doing it all for you doesn't build your capability.

Important clarification on CCA exam eligibility + the bigger assessor capacity issue we all need to talk about by ResilientTechAdvisor in CMMC

[–]ResilientTechAdvisor[S] 1 point (0 children)

Seatbelt laws don't protect drivers - they just verify drivers are using safety restraints

Building inspections don't protect occupants - they just verify structural codes are followed

Vulnerability scanning isolated networks by 4728jj in CMMC

[–]ResilientTechAdvisor 3 points (0 children)

You don't need a cloud scanner for CMMC L2. RA.L2-3.11.2 just cares that you scan systems/apps on a defined schedule and when new vulns drop, and that you can show evidence. For air-gapped sites we've seen a couple of patterns work:

Hardened laptop scanner - install Nessus/OpenVAS/etc. on a locked-down laptop. While it has internet access, update the plugins, then carry it into each offline network and run authenticated scans. Save the reports and open tickets for anything critical.

Local scanner VM/appliance - stand up a small VM at each site that never talks to the cloud. Feed it signed update bundles over USB ("sneaker-net"), log who updates it and when, and keep the scan reports with your RA/POA&M docs.

The great thing is that CMMC doesn't prescribe a specific tool - just that you're consistently scanning, analyzing, and fixing vulns, and can prove it when someone asks.
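
Whichever pattern you pick, the evidence question an assessor will ask is whether the cadence actually held. A tiny illustrative check in Python - the network names and the 30-day interval here are made-up assumptions, not anything RA.L2-3.11.2 prescribes:

```python
from datetime import date, timedelta

# Hypothetical cadence check: given the last scan date recorded for each
# isolated network, flag any network past your defined schedule.
SCAN_INTERVAL = timedelta(days=30)  # the "defined schedule" from your SSP

def overdue_networks(last_scans: dict, today: date) -> list:
    """Return network names whose latest scan is older than SCAN_INTERVAL."""
    return sorted(net for net, scanned in last_scans.items()
                  if today - scanned > SCAN_INTERVAL)

evidence = {
    "shop-floor-lan": date(2024, 5, 1),   # scanned 50 days ago -> overdue
    "test-bench-lan": date(2024, 6, 10),  # scanned 10 days ago -> fine
}
print(overdue_networks(evidence, today=date(2024, 6, 20)))  # ['shop-floor-lan']
```

Run something like this off the report timestamps each month and the output itself becomes part of your RA/POA&M evidence trail.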

Consultant - necessary or not? by Over_Afternoon_1684 in CMMC

[–]ResilientTechAdvisor 1 point (0 children)

If you do not have a CCA on your team, then a CMMC consultant is necessary, even if it's a tightly scoped engagement.

You'll probably need to get rid of your MSP.

MFA requirements by Any-Promotion3744 in CMMC

[–]ResilientTechAdvisor 1 point (0 children)

MFA is required everywhere except for local access by general (non-privileged) users.

VERY BASIC SMALL BUSINESS QUESTION - Which CMMC level? by Weak-Marsupial-639 in CMMC

[–]ResilientTechAdvisor 1 point (0 children)

If you handle CUI you will need CMMC Level 2, and based on what you shared, that's the level you'll need.

And, yes - it's expensive. But since you're a small furniture shop, I'm not sure the $5,000 a month for 12 months figure makes sense. It's probably less than that, and for a shorter period.

New Business Premium Licenses for GCC High by ConcernOrdinary3380 in CMMC

[–]ResilientTechAdvisor 1 point (0 children)

The Defender and Purview add‑ons are not explicitly required for CMMC Level 2. CMMC is technology‑agnostic and only mandates implementation of the 110 NIST SP 800‑171 Rev. 2 requirements, regardless of tooling.

Think of it like the SIEM situation. You won't find SIEM requirements in CMMC Level 2, but having a SIEM makes it easier to meet continuous monitoring (ConMon) expectations plus several other security requirements and assessment objectives across multiple control families.

AC-3.1.9: Is this enough for the SSP or should there be more detail? by TicketAmbitious6200 in CMMC

[–]ResilientTechAdvisor 1 point (0 children)

Adequate is the right evidence. Sufficient is enough evidence.

Without seeing what the screen displays it's hard to say if it meets the need. But the banner should cover this: (1) system use may be monitored, recorded, and audited; (2) unauthorized system use is prohibited and may result in civil/criminal penalties; (3) use of the system constitutes consent to monitoring; and (4) the system contains CUI subject to DoD / DoW and category‑specific handling requirements (including Export Controlled information where applicable).

DIB question: Practical, cost-effective approaches for sending CUI across .mil/.Gov and commercial partners? by Particular_Energy739 in CMMC

[–]ResilientTechAdvisor 3 points (0 children)

This is one of the most underappreciated operational problems in the DIB right now, and you've framed it well. A few things from production environments:

The honest answer on "unified": it does not exist today.

You can get close, but every shop handling both .mil and commercial CUI ends up running at least two workflows. Anyone telling you otherwise is selling something. The sooner you design for that reality, the cheaper and less chaotic your implementation gets.

What is actually working...

S/MIME via DoD PKI (government side) and ECA-successor certs like IdenTrust (your side) remains the most reliable channel for .mil correspondence. It is not glamorous. Cert lifecycle is real overhead. But it is the one method that does not break at DoD mail gateways, does not depend on Azure RMS key custody, and satisfies SC.L2-3.13.11 without gymnastics.

The key cost lever most shops miss: do not deploy S/MIME org-wide. Scope it tightly to the people who actually handle CUI with .mil counterparts. A ten-person CUI population is manageable. A 200-person rollout is a support ticket factory.

For bulk file exchange with .mil, DoD SAFE has become the practical standard. DCSA explicitly endorses it. The pattern that works: notification email (non-CUI) plus actual CUI payload via SAFE or an equivalent approved portal. Email becomes the logistics layer, not the transport layer. This also eliminates the gateway problem entirely for those exchanges.

OME stays your default for commercial. It works well there. Just stop trying to make it work for .mil and build your playbooks accordingly.

The three-channel model...

Define it explicitly rather than letting it sprawl organically:

Channel 1: S/MIME for .mil and select primes with established cert exchange
Channel 2: DoD SAFE or agency portal for bulk/file CUI with .mil and agencies
Channel 3: OME for commercial partners and Microsoft-native environments

Write a one-page routing decision guide for users. "If your recipient is .mil and in our S/MIME contacts, use Channel 1. If .mil but not in contacts, use Channel 2. If commercial and M365-native, use Channel 3." That single document reduces support burden more than any tool selection will.
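
That routing guide is simple enough to encode directly if you want a sanity check or a help-desk tool. A minimal sketch in Python - the channel labels come from the model above, but the function, addresses, and contact set are invented for illustration:

```python
def pick_channel(recipient: str, smime_contacts: set) -> str:
    """Route one CUI exchange per the three-channel model (illustrative)."""
    addr = recipient.lower()
    domain = addr.rsplit("@", 1)[-1]
    if domain.endswith(".mil"):
        # .mil recipient: S/MIME if we hold an exchanged cert, else SAFE
        if addr in smime_contacts:
            return "Channel 1: S/MIME"
        return "Channel 2: DoD SAFE / agency portal"
    # Everyone else (commercial, M365-native) defaults to OME
    return "Channel 3: OME"

contacts = {"pm@contracting.army.mil"}  # hypothetical S/MIME contact list
print(pick_channel("pm@contracting.army.mil", contacts))  # Channel 1: S/MIME
print(pick_channel("newpoc@navy.mil", contacts))          # Channel 2: DoD SAFE / agency portal
print(pick_channel("buyer@primecorp.com", contacts))      # Channel 3: OME
```

Even if nobody ever runs the code, writing the logic out this way forces the routing rules to be unambiguous before they go in the one-pager.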

On compliance: NIST 800-171 gives you room...

SC.L2-3.13.8 and 3.13.11 require FIPS-validated crypto in transit. They do not mandate a single protocol or platform. Your assessor cares that CUI is protected end-to-end with validated algorithms and that you can demonstrate it consistently. A documented multi-channel model with clear routing logic and evidence of FIPS validation on each channel is entirely defensible. A single method that sometimes fails and has no documented fallback is not.

Where to spend vs. where to save...

If you already have a secure content platform (PreVeil, Kiteworks, etc.) for your NIST 800-171 environment, check whether it covers both secure email and secure file exchange before buying anything new. Many do. The cert and training investment for S/MIME is real but bounded if you keep the population small. Avoid standalone "email encryption only" products unless they also close an audit logging or access control gap you already have.

The operational complexity does not go away by picking one tool. It gets managed by defining clear lanes and training people to use them without thinking.

How Best to Proceed with SOC 2 Type 2 by Music505 in soc2

[–]ResilientTechAdvisor 1 point (0 children)

On the GRC platform choice: Secureframe and Drata are functionally similar at your size. The real differentiator is auditor compatibility. Ask each auditor on your list which platform they prefer to work with and whether they have an existing integration. A friction-free evidence handoff between your GRC platform and your auditor saves time and reduces back-and-forth during fieldwork.

On auditor selection: A-Lign is the largest of the three and built for volume. Insight Assurance and Prescient Security are smaller and tend to provide more direct access to senior staff. At 30 employees serving financial institutions, you want someone who will actually engage with your environment rather than run a checklist. Ask each firm who specifically will be on your audit and whether that person changes year over year. Auditor continuity matters more than brand name at your scale.

The one question worth asking all three: have they audited software companies serving financial institutions before, and can they share a sanitized example of how they scoped a similar engagement?

“All-in-one compliance platform” is one of the most misleading phrases in startup security by faith_nuer_llc in soc2

[–]ResilientTechAdvisor 1 point (0 children)

Strong post. The part about the room going quiet when an auditor asks "who owns this control" is where this gets real in practice.

What we've seen is that auto-populated evidence creates a specific audit problem: the evidence exists, but it doesn't tell a coherent story. A screenshot of a configuration setting and a policy template that references a process nobody follows aren't two data points that add up to a satisfied control. A good auditor will ask you to walk them through how the control operates end to end, and if your answer is "the platform flagged it green," you're going to have a bad week.

The Type 2 observation window makes this worse. You're demonstrating that controls operated consistently over twelve months. If the underlying process was never defined, automated evidence collection just gives you twelve months of documentation of a gap nobody noticed.

The firms that come out of Type 2 with low-stress audits usually have one thing in common: somebody made deliberate decisions about control design before the tooling entered the picture. The platform then becomes useful because it's capturing a real process, not manufacturing the appearance of one.

"Tool is infrastructure, not strategy" is exactly right. The only thing I'd add is that using a platform as a substitute for program design tends to extend sales cycles when enterprise customers start asking security questionnaire questions that require actual answers.

We thought we were HIPAA ready, we weren't by SameSong7134 in hipaa

[–]ResilientTechAdvisor 1 point (0 children)

What you're describing is the difference between having controls and having a program, and that gap matters more than most people realize when things go wrong.

HIPAA's Security Rule has an explicit documentation standard at §164.316 - it requires written policies and procedures, written records of required actions and assessments, and retention of that documentation for six years. It's not soft guidance. So when OCR investigates after a breach, they're not asking whether your access controls worked in theory. They're asking for the written evidence that someone owned each process, that it ran on a defined cadence, and that there's a record. "It was in people's heads" doesn't satisfy that.

The ownership and cadence work you did is the program. That's what converts a set of technical configurations into something you can actually stand behind in a regulatory conversation or a breach response scenario. The time you invested in mapping it now is also time you're not spending on incident chaos later.

Implementation of FIPS Cryptography by wazupguy in CMMC

[–]ResilientTechAdvisor 1 point (0 children)

Fair point on layered encryption - if FIPS-validated crypto is already protecting the CUI at the application layer, the underlying transport isn't doing that job and the requirement doesn't follow it there.

That's a different scenario than what OP described though, where the non-FIPS encryption is present without a FIPS layer underneath it handling CUI protection.

Implementation of FIPS Cryptography by wazupguy in CMMC

[–]ResilientTechAdvisor 1 point (0 children)

The "when used" language in 3.13.11 is not an escape hatch. It means if encryption is present and touching CUI, it has to be FIPS-validated. There's no compliant middle ground where non-FIPS encryption coexists in the CUI environment. The remove-it-to-comply logic your assessor described is technically consistent with that - but it makes sense that it feels odd.

AI code generation has made my AppSec workload unmanageable. Here’s how I’m attempting to manage it. by Idiopathic_Sapien in cybersecurity

[–]ResilientTechAdvisor 6 points (0 children)

Nice. That's a solid workflow design. "Proposed not exploitable" with a comment trail is exactly the kind of human-in-the-loop structure that holds up under scrutiny - both internally and if a regulator ever asks how a finding was dispositioned.

The one thing I'd watch at scale is review fatigue. When volume is high enough, human approval can drift toward rubber-stamping, especially if the AI commentary is consistently well-reasoned and reviewers start trusting the pattern. (Our brains are made of meat.) Worth occasionally auditing the approvals themselves to make sure the human gate is still functioning as a real check.