CMMC News by Jun Cyber

Navigating New DOD ODP Mandates in NIST SP 800-171 Revision 3

• Wilson Bautista Jr.


🚨 Working with the Department of Defense or handling Controlled Unclassified Information (CUI)? Here’s what you need to know about the DOD’s new approach to NIST SP 800-171 Revision 3 ODP values.

Just listened to the latest episode of CMMC News, where the hosts did a deep dive into the recent DOD memo standardizing “Organization Defined Parameters” (ODPs) for protecting CUI. If you’re a defense contractor—or work in the DIB—these aren’t just guidelines, they are your new minimums.

🔑 3 Key Takeaways:

  • No More Guesswork: The DOD has filled in the “blanks” of NIST 800-171 R3 by setting specific ODP values. These are now the baseline for all contractors—think max inactivity timeouts, access control reviews, and patching deadlines.
  • Timelines Are Tight: Some key numbers to know (see the sketch after this list):
    • Account inactivity? Disable within 90 days.
    • Privileged session logoff? Required at end of work period.
    • High-risk vulnerability patching? 30 days max.
    • Quarterly updates for password “bad lists” and system inventories.
  • Documentation & Continuous Vigilance: Annual (or more frequent) reviews for policies, logs, training, and agreements are now required. Plus, always justify and document any deviations or risk-based modifications—the DOD wants your decisions traceable.
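If you track these cadences in a script or GRC tool, they reduce to simple data. Here's a minimal illustrative sketch in Python (my own, not from the memo; the 92-day "quarter" is a deliberately conservative approximation, and the item names are hypothetical):

```python
from datetime import date, timedelta

# Review cadences highlighted in the episode, as data. Illustrative only; the
# DOD ODP memo and NIST SP 800-171 R3 are the authoritative sources.
CADENCE_DAYS = {
    "password_badlist_update": 92,   # "quarterly", approximated conservatively
    "system_inventory_update": 92,
    "account_inactivity_disable": 90,
    "high_risk_patch": 30,
}

def overdue(item: str, last_done: date, today: date) -> bool:
    """True if the item's window has lapsed since it was last done."""
    return today > last_done + timedelta(days=CADENCE_DAYS[item])

print(overdue("high_risk_patch", date(2025, 1, 2), date(2025, 6, 1)))  # True
```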

The big picture: The DOD is taking the ambiguity out. If you handle CUI, you must implement these specific controls—or document strong justification for any flexibility allowed. And these requirements will change as threats evolve, so keep your risk assessments and compliance efforts agile.

Want the full details? Highly recommend listening to the episode and reviewing both the NIST SP 800-171 R3 standard and the new DOD ODP memo. Stay compliant, stay secure! 💪

See the original PDF here: https://drive.google.com/file/d/1rtgUmlaCiUKst-mHR7Fsz5O95g46hCra/view

#cybersecurity #DoD #NIST #CUI #compliance #riskmanagement #defenseindustry

Support the show

Welcome to the Deep Dive. Today, we're digging into something really critical for anyone working with the Department of Defense. Mhmm. We're looking at a recent policy doc about protecting sensitive info, what they call controlled unclassified information, or CUI. That's right. You can think of it as the DOD setting some, let's say, specific rules of the road. We're focusing today on a memo they issued about the latest version of these key security standards, NIST Special Publication eight hundred one seventy one, revision three. Right. NIST SP eight hundred one seventy one r three. And it brings in this idea of organization defined parameters, or ODPs. Yeah. So the basic concept is these guidelines have some blanks to fill in. Yeah. Exactly. Fill in the blanks. And organizations can sort of tailor those based on their own risk picture. Precisely. These ODPs acknowledge that security isn't, you know, one size fits all. A smaller contractor might have very different risks compared to a huge defense firm. So these parameters, they allow for some flexibility. Okay, but. Yeah. And this is the key part. The DOD is stepping in now and making it very clear for its contractors. Mhmm. This memo actually spells out the DOD's specific answers for those blanks, their values for the ODPs. Yeah. So it's not just about knowing the NIST standard anymore. You have to know how the DOD expects you to apply it. These are minimums they're setting.

And how did they come up with these values? It wasn't just one office, right? No. Not at all. It was actually quite collaborative, which is interesting. They got input from different DOD parts, other government agencies, the big research centers, UARCs and FFRDCs, and even industry folks weighed in. It sounds like they really tried to get buy-in and cover the bases. And you noted sometimes it's not a hard number, but more like guidance. That's a really key point. Mostly, yeah, they've set firm minimums. But for a few ODPs, they offer guidance. That suggests a bit more room for a risk based approach there. But you still need to justify your choices in those guidance areas, I imagine. Absolutely. You'll need solid reasoning documented. Okay. So our mission today is to really unpack these key ODPs the DOD has defined. We wanna give you, our listeners, a clear understanding of what they are and, you know, what they actually mean if you work with the DOD. Sounds good.

Let's maybe start with access control. How is the DOD handling system account management? Section three point one point one, I think. Right. Three point one point one, system account management. The DOD gets pretty specific with timelines here. For instance, if an account hasn't been used, it has to be disabled. How long? Within ninety days maximum. Ninety days. Okay. That makes sense. Don't want old dormant accounts just hanging around. Yeah. Potential security hole. Exactly. And they're also strict about notifications. Notifications. Like, when someone leaves. Yeah. Exactly. Within twenty four hours, account managers need to know if an account's not needed anymore, if someone leaves or transfers, or even if their system usage or need to know changes. Twenty four hours. That's quick. It has to be, to shut down access fast. What about logging out? Is there a rule for that? Yes. There is. Users need to be logged out after, at most, twenty four hours of inactivity. Twenty four hours seems kinda long, actually. It does, but there's a catch for privileged users. Okay. The admins and such. Right. For them, it's stricter. They have to log out at a minimum when their work period ends. No leaving privileged sessions open overnight. Adds an extra layer for those powerful accounts. Mhmm.
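To make the ninety-day disable rule above concrete, here's a minimal sketch (my own example, not from the memo; the account-record shape is hypothetical) of flagging dormant accounts for disabling:

```python
from datetime import datetime, timedelta, timezone

# DOD ODP from the episode: disable accounts unused for 90 days.
INACTIVITY_DISABLE = timedelta(days=90)

def accounts_to_disable(last_login: dict[str, datetime],
                        now: datetime) -> list[str]:
    """Usernames idle long enough that the account must be disabled."""
    return [user for user, seen in last_login.items()
            if now - seen >= INACTIVITY_DISABLE]

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
records = {"alice": datetime(2025, 5, 20, tzinfo=timezone.utc),
           "bob": datetime(2025, 2, 1, tzinfo=timezone.utc)}
print(accounts_to_disable(records, now))  # ['bob']
```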
Okay. Let's move to three point one point five, system access authorization. This sounds like the principle of least privilege. That's the core idea. Yes. Only the access needed for the job. And the DOD specifies the security functions that definitely require formal authorization. Like, what kind of functions? Well, things like setting up accounts, assigning privileges, configuring access controls themselves, setting up audit logging, defining vulnerability scanning parameters, intrusion detection settings, and managing the audit info. Basically, the keys to the kingdom, security wise. Yeah. You definitely want control over who can touch those settings. And it's not just actions, but also certain types of information require authorization too. Correct. The DOD lists specific types of security relevant info. Think threat and vulnerability data, firewall rules, security service configurations, cryptographic key management info. So the sensitive stuff that makes security work. Exactly. Also, the security architecture itself, access control lists, and, again, audit information. Very sensitive stuff. Got it. And these privileges, they don't just get assigned once and forgotten, right? There's a review process. There is. DOD mandates reviewing the privileges assigned to roles or user groups at least every twelve months. Annually. Yep. A regular checkup to prevent privilege creep and make sure access is still necessary.

Okay. Let's zoom in on privileged accounts themselves. Section three point one point six. What are the main rules there? The big one is restriction. Privileged accounts are only for defined authorized people or specific admin roles. Reinforces least privilege for the most powerful accounts. And how should these accounts be used day to day? Is there guidance on that? Yes. Very clear guidance. Privileged users must use their regular nonprivileged accounts for everyday tasks, email, web browsing, general work. So don't use the admin account unless you're actually doing admin tasks. Precisely. It limits the exposure if that powerful account gets compromised somehow. Makes sense.

Okay. Section three point one point eight, invalid logon attempts. We've all fat fingered a password. How many tries do we get? The DOD sets the limit at five. No more than five consecutive invalid attempts within a five minute window. Five in five minutes. Standard practice, pretty much. And if you hit that limit? Automatic action. The system has to either lock the account or maybe just the device you're using for at least fifteen minutes. Fifteen minutes. Or longer. Or until an admin manually unlocks it. And either way, the admin needs to be notified. Right. Deters brute force guessing, but doesn't totally lock you out for typos.

And what about device lock? Section three point one point one zero, stepping away from the keyboard. Two parts here. First, the system must auto lock after a maximum of fifteen minutes of inactivity. That's the safety net. Fifteen minutes. And the second part? Personal responsibility. Users must manually lock their device before leaving it unattended, period. Doesn't matter if the timer's about to kick in. So auto lock and manual lock, both required. Both. And when it's locked, it has to stay locked until you reauthenticate. Plus, whatever was on the screen needs to be hidden. No sensitive info left visible on a locked screen. Good point.
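The lockout math from three point one point eight above fits in a few lines. A minimal sketch (illustrative only; in practice this lives in your identity provider or OS policy, not application code):

```python
from datetime import datetime, timedelta, timezone

MAX_ATTEMPTS = 5                       # at most 5 consecutive invalid attempts...
ATTEMPT_WINDOW = timedelta(minutes=5)  # ...within a 5-minute window
LOCK_DURATION = timedelta(minutes=15)  # lock for at least 15 minutes (or admin unlock)

def should_lock(failed_attempts: list[datetime], now: datetime) -> bool:
    """True when 5 or more failures fall within the last 5 minutes."""
    recent = [t for t in failed_attempts if now - t <= ATTEMPT_WINDOW]
    return len(recent) >= MAX_ATTEMPTS

now = datetime(2025, 6, 1, 12, 0, tzinfo=timezone.utc)
fails = [now - timedelta(minutes=m) for m in (0, 1, 2, 3, 4)]
if should_lock(fails, now):
    print("lock until", now + LOCK_DURATION)  # an admin notification would also fire here
```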
Okay. What about ending a whole session, not just locking? Section three point one point one one, session termination. When does that happen automatically? DOD defines a few conditions. That twenty four hour inactivity limit we mentioned is one. Okay. Also, if the system detects misbehavior, like someone trying to violate security policy. Interesting. And finally, for system maintenance reasons, like if updates need to be applied or there's a system issue. So inactivity, bad behavior, or system needs. Got it.

Let's shift gears a bit. Use of external systems, three point one point two zero. This feels very relevant today. Hugely relevant. The DOD baseline is clear. No unauthorized external systems, period. You need specific permission. And if a system is authorized? Yeah. Are there rules? Oh, yes. Organizations have to set up specific terms and conditions for using any authorized external system. These rules need to cover, at minimum, what kinds of organizational apps can be accessed from outside and what's the highest CUI level allowed on that external system. So you need to know exactly what data can go where and how. What if you can't agree on terms with the external system owner? Well, the DOD says you might just have to restrict your own people from using that external system. Yeah. Can't guarantee security? Don't use it. Makes sense. Any other restrictions? Yes. A big one. No using organization controlled portable storage like USB drives on external systems. Too risky for data transfer. And they mentioned NIST SP 800-47 for guidance on setting up secure info exchanges if needed.

Okay. That covers access control pretty thoroughly. Let's move to awareness and training, sections three point two point one and three point two point two. How often does basic security literacy training need to happen? At least annually, every twelve months, for everyone. Everyone? Everyone. Plus, when they first join, and also whenever there's a significant incident or a big change in the risk environment. And the training material itself needs updating too. Yep. Same frequency. Update the content at least annually and after major incidents or risk changes. Keep it fresh. Keep it relevant. Okay. What about role based training tailored for specific jobs? Very similar frequency, at least every twelve months, and also triggered by system changes or significant incidents or risk changes. And when does this role based training need to happen? Crucially, before someone gets access to the system or CUI or starts doing their duties. You need the knowledge first. Train first, access later. Makes sense. And the content updates? Same deal. Update the role based content at least every twelve months and after incidents or risk changes. Got it.

Let's talk audit and accountability. Logging events, three point three point one. What kind of things must be logged according to the DOD? It's a pretty long list, as you'd expect. Minimums include authentication events, logins, logoffs, actions on key files and objects, create, access, delete, modify, permission changes. So who touched what, when. Right. Also, data exports, downloads, imports, uploads, user and group management, adds, deletes, changes, disables, locks. Use of privileged rights is big: policy changes, config changes, admin access, trying to get more privileges. Wow. That is detailed. Anything else? Oh, yeah. Accessing the audit logs themselves, system reboots, restarts, shutdowns, printing, even application startups. They want a very clear trail of activity. They also point to OMB guidance for more detail. A comprehensive picture.
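One way to keep a logging list like that honest is a simple coverage check. A sketch (the category names are my paraphrase of the episode's list, not the memo's wording):

```python
# Minimum auditable event categories per the episode (paraphrased labels).
REQUIRED_EVENTS = {
    "logon", "logoff", "file_create", "file_access", "file_delete",
    "file_modify", "permission_change", "data_export", "data_import",
    "account_management", "privileged_action", "audit_log_access",
    "system_restart", "print", "application_start",
}

def logging_gaps(enabled: set[str]) -> set[str]:
    """Required categories the current logging configuration misses."""
    return REQUIRED_EVENTS - enabled

print(logging_gaps({"logon", "logoff", "file_access", "print"}))
```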
And you don't just set up logging and walk away, do you? Definitely not. You have to review and update what you're logging at least every twelve months and after significant incidents or risk changes. Make sure you're still capturing the right stuff. What if the logging system itself fails? Section three point three point four. What's the procedure? Immediate alert. Personnel need to be notified in near real time as soon as a failure is discovered. Can't have gaps in the logs. And besides the alert? There are follow-up actions. You have to document the failure and the fix, troubleshoot it, obviously repair or restart it, and report it as an incident if it warrants that. So treat a logging failure seriously.

Now you've got all these logs. What about reviewing them? Section three point three point five. Regular review is mandatory. DOD requires review and analysis of audit records at least weekly. Weekly? Weekly. Looking for any inappropriate or unusual activity. It's about proactive threat hunting in the logs. It makes sense for sensitive data. Okay. Last bit on auditing, time stamps. Mhmm. Section three point three point seven. How precise do they need to be? Granularity needs to be one second or better, and they require using UTC time or a documented fixed local offset from UTC that's included in the time stamp. Need that precision for correlating events across systems.
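For that time stamp requirement, here's a minimal Python logging sketch (illustrative; production systems typically rely on NTP-synced syslog rather than application-level formatters):

```python
import logging
import time

# Emit UTC timestamps at one-second granularity, per the 3.3.7 ODP discussion.
formatter = logging.Formatter(
    fmt="%(asctime)sZ %(levelname)s %(message)s",  # trailing Z marks UTC
    datefmt="%Y-%m-%dT%H:%M:%S",                   # one-second granularity
)
formatter.converter = time.gmtime  # render record times in UTC, not local time

handler = logging.StreamHandler()
handler.setFormatter(formatter)
audit_log = logging.getLogger("audit")
audit_log.addHandler(handler)
audit_log.setLevel(logging.INFO)

audit_log.info("user alice exported report.csv")  # e.g. 2025-06-01T12:00:00Z INFO ...
```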
Okay. Moving on to configuration management. Section three point four point one talks about baseline configurations. What does that mean? It means defining, documenting, and controlling a standard, known good, secure configuration for your systems. It's your starting point, your reference. And this baseline isn't static, is it? How often is it reviewed? Review and update at least every twelve months. Also, update it whenever system components are installed or modified and after significant incidents or changes. Keep it current. Makes sense. What about specific configuration settings? Three point four point two mentions restrictive settings. Yeah. The idea is to lock things down. Establish, document, and implement settings that are as restrictive as possible while still letting the system do its job. A least functionality configuration. And they give pointers on where to find good configurations. They do. They point to the NIST National Checklist Program, the NCP. Use those common secure configs. They also stress preventing remote devices from being connected to, say, your network and an untrusted network simultaneously. Right. Avoid bridging networks. Exactly. And if you have to deviate from standards, you must document it, justify it, and get it approved. Record keeping is key.

Okay. Three point four point six, overall system configuration. Mission essential capabilities only. Reduce the attack surface. Configure systems only with the functions absolutely needed for their mission. Get rid of the fluff. How do they suggest doing that? Guidance includes limiting components to a single function where possible, removing unused software, disabling unnecessary ports and protocols, both physical and logical. They recommend using network scanning, IDS/IPS, endpoint protection to help enforce this. And, really, building systems with minimal functions should be part of the design process. And checking for this unnecessary stuff, how often? Review at least every twelve months, whenever you change functions, ports, protocols, or services, and after significant incidents or risk changes. Constant vigilance.

Software execution, three point four point eight. This sounds like whitelisting. Deny all, allow by exception. That's exactly it. By default, nothing runs. Only software explicitly authorized on a list is allowed to execute. Very effective against malware. And that allowed list needs upkeep. Yes. Review and update the authorized software list at least quarterly, every three months. Quarterly. Okay. Keeping an inventory of system components, three point four point one zero, seems fundamental. It is. You need to develop and document an inventory of all your system components, hardware, software. Know what you need to protect. Update frequency for the inventory? At least quarterly. And, also, whenever components are installed, removed, or updated. Keep that list accurate.

Now a specific scenario. Systems used in high risk locations, section three point four point one two. What are the DOD rules? Very strict. Systems issued for high risk travel must have no CUI or FCI stored on them beforehand. None at all? None. And they should be configured to prevent processing, storing, or transmitting CUI or FCI unless you get a specific written exception from the contracting officer. That's tough. What happens when the system comes back? Two steps. First, examine it carefully for physical tampering. Then you either have to completely wipe and reimage the storage or physically destroy the system. Wow. Okay. Serious measures for high risk travel. Absolutely. Can't risk compromise.

Let's switch to identification and authentication. Yeah. Section three point five point one. Uniquely identifying and authenticating users is basic. But when does DOD require reauthentication? Several triggers. Changes in user roles, changes in their authenticators, like getting a new token, or changes in their permissions. Okay. Something about their access changes. Right. Also, if the security level of the system they're accessing changes, whenever they perform a privileged function, and after their session terminates. Basically, key transition points. Got it. What about identifying devices? Three point five point two. Also required. Uniquely identify all devices before they connect. And where feasible, authenticate the device too. If authentication isn't feasible, you have to document why. Okay. Identifier management, three point five point five. Reusing old usernames? Big no no. DOD prohibits reusing an identifier for at least ten years after it's deactivated. Ten years. Okay. Avoids confusion and maintains accountability. And managing identifiers involves looking at user characteristics. Yes. Things like, is it a privileged or non privileged user? Are they an employee, contractor, foreign national, someone external? These factors help tailor controls.

Password management, three point five point seven. We all know about complexity, but what about that list of compromised passwords? Need to maintain a bad password list, commonly used, guessable, or known compromised ones, updated at least quarterly and whenever you suspect a compromise. Quarterly updates for the blacklist. And password creation rules? Minimum length, 16 characters. 16. That's strong. Yep. And passwords can't contain the username or the user's full name. Good rules. What about managing authenticators themselves? Tokens, cards. Section three point five point one two, refresh rates. Depends on the type. Passwords protected by MFA, no required refresh. Hard tokens, badges, refresh at least every five years. Five years for tokens. Okay. Other authenticator types, at least every three years. And any authenticator needs refreshing after an incident or suspected compromise or loss. Got it.
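Those password rules translate into a short acceptance check. A sketch (the blocklist here is a stand-in for a real, quarterly-refreshed compromised-password feed):

```python
# Stand-in for a quarterly-updated list of common/compromised passwords.
BAD_PASSWORDS = {"password123!", "winter2025!!"}

def password_ok(candidate: str, username: str, full_name: str) -> bool:
    """Apply the minimums from the episode: length 16+, no username or
    full-name parts embedded, and not on the bad-password list."""
    lowered = candidate.lower()
    if len(candidate) < 16:
        return False
    if username.lower() in lowered:
        return False
    if any(part.lower() in lowered for part in full_name.split()):
        return False
    return lowered not in BAD_PASSWORDS

print(password_ok("correct horse battery staple", "alice", "Alice Smith"))  # True
print(password_ok("alice-rules-the-lab!", "alice", "Alice Smith"))          # False
```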
Moving to incident response. Mhmm. Section three point six point two, reporting incidents. How fast? Near real time. Report suspected incidents to the organization's IR capability basically as soon as practical after discovery. Speed is critical. And reporting goes beyond the internal team. Yes. Report to everyone required by the contract and the incident response plan. Could be DOD components, other agencies. Depends on the situation. Testing the response capability, three point six point three. How often? Test the IR capability at least every twelve months. Practice makes perfect, or at least better. Annual testing it is. And incident response training, three point six point four. Timelines? Initial training within ten days for privileged users, thirty days for everyone else. Quick onboarding for training. Then recurring training at least every twelve months, plus refreshers after significant incidents or risk changes. And update the training content itself annually or after incidents or risk changes too. Okay.

Media protection, three point eight point seven. Removable media like USB drives. Big restrictions. Restrict or prohibit any removable media not managed by the organization. Control what gets plugged in. And what about drives found lying around? Absolutely forbidden if they don't have an identifiable owner. Don't plug in random USB sticks. Good advice anytime.

Personnel security, three point nine point one. Screening is standard, but what about rescreening? DOD says rescreening is needed if organizational policy requires it after a significant incident or if there's a change in someone's status that might affect their trustworthiness or access needs. So triggered by events or policy. What about when someone leaves or transfers? Section three point nine point two. Termination actions are specific. Disable system access within four hours. Four hours. Revoke their authenticators and credentials, get back any security related property they have. If they transfer roles internally, review their access rights immediately and adjust as needed for the new role. Tight process.
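That four-hour window is easy to encode as a hard deadline in offboarding tooling. A sketch (my own framing; any real workflow fields are hypothetical):

```python
from datetime import datetime, timedelta, timezone

DISABLE_WITHIN = timedelta(hours=4)  # 3.9.2: disable system access within 4 hours

def access_disable_deadline(termination_notice: datetime) -> datetime:
    """Latest allowable time to disable a departing user's access."""
    return termination_notice + DISABLE_WITHIN

notice = datetime(2025, 6, 2, 9, 30, tzinfo=timezone.utc)
print(access_disable_deadline(notice))  # 2025-06-02 13:30:00+00:00
```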
Okay. Physical protection, section three point one zero point one. Facility access lists. Review frequency? Review the list of who's authorized for physical access at least every twelve months or after significant incidents or risk changes. Keep it current. And monitoring physical access, three point one zero point two. Reviewing logs. Monitor for physical incidents and review the physical access logs at least every forty five days and after significant incidents or risk changes. Forty five days for physical logs. Got it. Alternate worksites, three point one zero point six, working from home or elsewhere. Need adequate security measures there too. Comparable to the main site where practical. This has to be documented in policy and covered in training. Consistent security wherever the work happens.

Risk assessment, three point one one point one. How often do you update? Assess risk, including supply chain risk related to CUI disclosure, and update that assessment at least every twelve months or after significant incidents or risk changes. Annual risk assessment update. What about vulnerability management? Three point one one point two. Scan frequency? Scan for vulnerabilities at least monthly. Also scan when new vulnerabilities affecting the system are identified. Monthly scans. And fixing found vulnerabilities, timelines? Based on risk. High risk vulns, fix within thirty days of discovery. Moderate, ninety days. Low, a hundred and eighty days. Thirty, ninety, one eighty. Clear deadlines. And the vulnerability scan data itself needs to be updated no more than twenty four hours before running the scan. Use fresh data. Right.

Security assessment and monitoring, three point one two point one. Assessing security requirements. Assess if you're meeting requirements at least every twelve months or after significant incidents or risk changes. Annual checkup. And when sharing CUI, three point one two point five. Agreements needed? Yes. Need agreements as described in the contract, could be ISAs, MOAs, MOUs. And review and update these exchange agreements at least annually. Okay.

System and communications protection. Section three point one three point nine, terminating network connections due to inactivity. Max, fifteen minutes of inactivity before the network connection associated with the session is terminated. Much shorter than the session timeout. Fifteen minutes for network connection. Got it. Cryptographic key management, three point one three point one zero. This one's guidance. Establish policy and procedures following the latest cryptographic key management best practices. Stay current. And when actually using crypto for CUI confidentiality, three point one three point one one. What type? FIPS validated cryptography. Mandatory, government tested, and approved. FIPS validated. Okay. Remote activation of things like webcams or mics, three point one three point one two. Generally prohibited. Exceptions are very rare. Must be listed and justified in advance in the SSP, only if no other option exists and it's operationally critical. Sounds like a high bar. It is. And even then, you need a clear visible indication to anyone physically present that it's being activated remotely. Transparency is key there.

System and information integrity. Flaw remediation, three point one four point one. Patching deadlines? Similar to vulnerability remediation, but based on patch release date. Critical and high risk flaws, patch within thirty days of release. Moderate, ninety days. Low, a hundred eighty days. Thirty, ninety, one eighty from patch release. Got it.
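Both remediation schedules above reduce to the same severity table, differing only in the start date (discovery for scan findings, patch release for flaws). A minimal sketch:

```python
from datetime import date, timedelta

# 30/90/180-day remediation windows from the episode, by severity.
SLA_DAYS = {"high": 30, "moderate": 90, "low": 180}

def fix_by(severity: str, start: date) -> date:
    """Deadline from the discovery date (3.11.2) or patch release date (3.14.1)."""
    return start + timedelta(days=SLA_DAYS[severity])

print(fix_by("high", date(2025, 6, 1)))      # 2025-07-01
print(fix_by("moderate", date(2025, 6, 1)))  # 2025-08-30
```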
Malicious code protection, three point one four point two, scan frequency. Scan systems for malware at least weekly, plus real time scanning of files from external sources as they enter or leave, are downloaded, opened, or executed. Weekly system scans plus real time protection. Okay.

Planning section. Policies and procedures, three point one five point one. Review frequency? Review and update at least every twelve months or after significant incidents or risk changes. Annual review. System security plan, the SSP, three point one five point two. Same frequency. Review and update the SSP at least annually or after incidents or changes. Keep it aligned with reality. And rules of behavior for users, three point one five point three. Also review and update at least annually or after incidents or changes. Seems like annually or after significant events is a common theme for documentation. Very common. Keeps things from getting stale.

System and services acquisition, section three point one six point one, system security engineering. Any key guidance? Guidance emphasizes having documentation for users and admins on implementing and using controls. Detail should match how much you rely on those controls. Include required settings, acceptance criteria for new stuff. And using external services like cloud providers, three point one six point three. Requirements? Cloud providers need FedRAMP Moderate authorization or an equivalent. FedRAMP Moderate. Other external service providers? They need to meet NIST SP eight hundred one seventy one revision two. Interesting. Rev two for other external providers. Okay.

Last area, supply chain risk management. The SCRM plan, three point one seven point one. Review frequency? You guessed it. Review and update the SCRM plan at least every twelve months or after significant incidents or risk changes. And minimum SCRM requirements, three point one seven point three. At minimum, integrate SCRM into acquisition policies, provide resources for it, define baseline security for suppliers, and have processes for suppliers to report major vulnerabilities or incidents they experience. You need visibility into your supply chain.

Okay. Wow. That was a lot of detail. Yeah. So wrapping this up, this deep dive really shows the DOD isn't leaving much ambiguity, is it? They've laid out very specific parameters for NIST eight hundred one seventy one r three. That's the key takeaway. They took those customizable ODPs in r three and essentially said, for our contractors, here are the mandatory minimum settings. It sets a clear, consistent baseline for protecting CUI across the board. Less guesswork, more clear expectations. Exactly. Think of it as the DOD standardizing the security posture it requires. And it's crucial to remember this isn't static. The memo itself says these ODP values will be updated. So organizations need to stay vigilant. Absolutely. This is an ongoing process. Threats evolve, so the requirements will too. Continuous monitoring and adaptation are essential. If you really want the full context, definitely read the NIST SP eight hundred one seventy one r three standard and the DOD memo itself. Good advice. So considering just how granular these requirements are, those timelines, thirty days for high risk patching, fifteen minute inactivity timeouts, what do you think the biggest challenges will be for organizations actually implementing all this? That's a great question. And maybe what kind of innovative solutions might pop up to help them cope? It's definitely a lot for organizations to digest and put into practice. Something important to think about. Thanks for joining us for this deep dive.


Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.


Dev.Sec.Lead

Wilson Bautista Jr.