DCC Scoping Mistakes That Fail Certification (and How to Avoid Them)
Per the IASME Scoping Guide: "Failure to adequately and accurately define the scope (e.g. under scoping) will result in a failure to achieve certification, even if all required controls have been met." Most Defence Cyber Certification failures do not come from missing controls - they come from misjudged scope. This post covers the six most common scoping mistakes Fig sees in scoping conversations, what each costs in time and fees, and how to avoid each. The principle: scoping is the most critical first step. Treat it as load-bearing, not paperwork.
DCC is built on Defence Standard (Def Stan) 05-138 issue 4, and the standard scope is intentionally broad. Per clause 1.1 of the standard, the scope is the supplier's overarching corporate or enterprise environment - all systems, processes, procedures, and data necessary for the effective protection of the data and functions in scope of the MOD contract. This goes beyond protecting only the information provided to the supplier in support of the contracted output. Per the IASME Scoping Guide, scope is not just about the data held: if the processes and systems are essential for the organisation to operate as a business in support of the contract, they must be within the DCC scope.
That breadth is where most scoping errors happen. What follows is the practical pattern of mistakes that fail certification.
Mistake 1 - Scoping too broad
The trap is "we will include everything to be safe." Suppliers default to including every site, every system, every supplier, on the theory that more is safer than less.
Why it fails. Every site, system, and supplier you include adds evidence work. A five-site organisation that scopes all five sites when only two are MOD-contract-relevant pays for two and a half times the evidence collection it needs. The assessor reviews every artefact you submit. Out-of-scope assets that get pulled in still need governance evidence, identity evidence, device evidence, and supply-chain evidence.
Real-world signal. The engagement runs two to three times the timeline of equivalent peers because the assessor is reviewing evidence that is not load-bearing for the contract.
How to avoid it. Agree in-scope assets at scoping based on the contract requirements, not "everything we own." Out-of-scope is a valid answer for systems with no MOD-contract data flow. The assessor expects to see a list of what is in scope and a list of what is out of scope, with diagrams showing how the scopes overlap and a list of sites with their functions.
Mistake 2 - Scoping too narrow
The opposite trap: "we will exclude everything we can to keep it cheap."
Why it fails. Under-scoping is the failure mode the IASME Scoping Guide warns about explicitly. If processes and systems essential to the organisation's operations in support of the contract are excluded from scope, the assessor cannot recommend certification - even if the in-scope assets meet every control. The result is a failed assessment, regardless of how well the included evidence holds up.
Real-world signal. The assessor pushes back at evidence review, demands additional systems be added to scope mid-engagement, the timeline extends, and remediation rounds increase. In the worst cases, the assessment is paused while scope is re-defined - effectively a re-engagement.
How to avoid it. Include everything that is essential for the organisation to operate as a business in support of the MOD-contract Functions and Data. Per Def Stan 05-138 i4 clause 1.1, the scope is the Supplier's overarching corporate or enterprise environment. The conservative test: if removing a system or service would impair the supplier's ability to deliver the contracted output, that system is in scope.
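As an illustration only, the conservative test can be sketched as a single pass over a hypothetical asset register. The asset names and the impairs_delivery_if_removed field below are invented for this example; a real register would record the justification for each answer.

```python
# Illustrative sketch: applying the "would removal impair contract
# delivery?" test to a hypothetical asset register. All entries are
# invented for illustration.

assets = [
    {"name": "Finance ERP",       "impairs_delivery_if_removed": True},
    {"name": "Engineering CAD",   "impairs_delivery_if_removed": True},
    {"name": "Marketing website", "impairs_delivery_if_removed": False},
]

# The conservative test: impairs delivery if removed -> in scope.
in_scope = [a["name"] for a in assets if a["impairs_delivery_if_removed"]]
out_of_scope = [a["name"] for a in assets if not a["impairs_delivery_if_removed"]]

print("In scope:    ", in_scope)
print("Out of scope:", out_of_scope)
```

The point of writing the test down this way is that it forces a yes/no answer per asset - exactly the in-scope and out-of-scope lists the assessor expects to see at scoping.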
Mistake 3 - Assuming Cyber Essentials evidence auto-maps to Level 1
The trap: "we have Cyber Essentials, the L1 evidence pack is mostly done."
Why it fails. Cyber Essentials covers thirteen NCSC technical controls across firewalls, secure configuration, user access, malware protection, and security update management. DCC Level 1 covers 101 controls, including governance maturity, supply-chain flow-down, secure-configuration evidence depth, identity-control specifics, and incident response - all of which CE does not reach. CE is a prerequisite, not a partial L1 evidence pack.
Real-world signal. Sixty to eighty per cent of governance, identity, and supply-chain controls return as gaps in the first round of remediation. The supplier is genuinely surprised because they assumed CE coverage was broader than it is.
How to avoid it. Treat CE as the floor, not the answer. Plan a separate evidence-collection sprint for the additional 88 L1 controls beyond CE. Do not rely on "CE-aligned" claims as a shortcut to L1 evidence. The CE evidence you hold underpins parts of the L1 secure-configuration and identity controls; it does not replace the governance, supply-chain flow-down, or secure-configuration depth required at L1.
Mistake 4 - Underestimating supply-chain evidence depth
The trap: "we will just attach our supplier list."
Why it fails. L1 supply-chain controls require documented flow-down (security clauses in supplier contracts), assessment evidence in the form of Supplier Capability Assessments (SCAs), and visibility on whether suppliers handle in-scope MOD data. A supplier list alone does not satisfy any of these.
Real-world signal. The supply-chain section of the L1 evidence pack returns the most findings of any control family. Suppliers respond to SCA requests on their own timelines - some take two to three weeks to return a signed assessment - and the engagement stalls while the supply-chain evidence is collected.
How to avoid it. Send SCAs to direct suppliers in week one of the engagement, not week five. Allow two to three weeks for supplier responses. Map flow-down clauses against the MOD contract scope - if your supplier contract says nothing about cyber security, the assessor will treat that as a gap. Use a standard SCA template covering 15-20 controls relevant to the contract context, not a free-text questionnaire.
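The week-one timing can be sketched with a few dates. This is a hypothetical schedule, not a Fig or IASME artefact: the engagement start date is invented, and the chase points simply reflect the two-to-three-week supplier response window described above.

```python
from datetime import date, timedelta

# Illustrative sketch: schedule SCA dispatch and chase dates for a
# week-one send, given a two-to-three-week response window.
# The engagement start date is invented.

engagement_start = date(2025, 3, 3)
sca_sent = engagement_start              # send in week one, not week five
chase_at = sca_sent + timedelta(weeks=2) # chase at the two-week mark
escalate_at = sca_sent + timedelta(weeks=3)  # escalate at three weeks

print(f"SCA sent:    {sca_sent}")
print(f"Chase from:  {chase_at}")
print(f"Escalate at: {escalate_at}")
```

Sending in week one means even a three-week supplier turnaround lands well before evidence review, rather than stalling the engagement mid-assessment.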
Mistake 5 - Leaving legacy systems in scope without a remediation plan
The trap: "we will include the Windows Server 2012 box, it is still in production."
Why it fails. Unsupported operating systems and end-of-life software in scope are high-severity findings. The assessor cannot recommend certification while a critical-severity gap is open. The remediation options are decommission, move out of scope (with a documented justification), or document compensating controls in detail (network segmentation, restricted access, monitored egress).
Real-world signal. Remediation round one surfaces it. Round two confirms the patching plan or the decommissioning plan. Round three closes documentation. The engagement extends four to six weeks beyond the typical band for legacy decommissioning - or longer if the legacy system is genuinely load-bearing and cannot be moved.
How to avoid it. Identify legacy systems at scoping. Either decommission before formal assessment opens, move them out of scope, or document compensating controls in detail before the assessor sees the evidence pack. The compensating-controls path requires substantive evidence - not a one-line note.
Mistake 6 - Compressing evidence collection into the formal assessment window
The trap: "we will start gathering evidence at week five when the assessment opens."
Why it fails. A Level 1 evidence pack covers 101 controls. Assembling that documentation takes weeks. Starting at the assessment window means evidence is still being created during formal review. Assessors flag fresh evidence with weak retention provenance - a policy authored last week is materially different from a policy that has been in use for six months. The result is multiplied clarification rounds and an extended timeline.
Real-world signal. Assessor queries multiply. Clarification rounds extend. The engagement timeline slips by four to six weeks beyond the typical 6-10 week band.
How to avoid it. Start evidence preparation at scoping, not at formal assessment. If you are using Fig, the platform's gap analysis surfaces what is missing in week one - giving you a four-week head start on remediation before the assessor sees the pack. Aim to enter formal assessment with the evidence pack 90% complete, with the remaining 10% being clarifications the assessor specifically asks for.
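The 90% readiness target is easy to track as a simple check. A minimal sketch, assuming a flat count of controls with evidence attached (the evidenced figure below is invented):

```python
# Illustrative sketch: readiness check against the 101 Level 1 controls
# and the 90%-complete target described above. The evidenced count is
# invented for illustration.

TOTAL_CONTROLS = 101
TARGET = 0.90

evidenced = 93  # controls with evidence attached (hypothetical)
readiness = evidenced / TOTAL_CONTROLS

status = "ready" if readiness >= TARGET else "not ready"
print(f"Evidence pack {readiness:.0%} complete - {status} for formal assessment")
```

A check like this run weekly from scoping onwards is what makes the "90% complete at assessment" entry criterion enforceable rather than aspirational.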
The common root
The six mistakes share a single root cause: treating scoping as paperwork rather than as the load-bearing decision it is. Under the DCC scheme, scoping decisions made in the first week of an engagement determine the cost, timeline, and outcome of everything that follows.
Per the IASME Scoping Guide: "scope is not just about the data held. If the processes and systems are essential for the organisation to operate as a business, then it must be within the Defence Cyber Certification scope." Apply that test at scoping, document the result clearly, and most of the remediation cycles that fail engagements simply do not open.
What to do next
The fix is operationally simple: book scoping with an IASME-licensed Certification Body before any other DCC work begins. The scoping call is normally free of charge - it confirms the Cyber Risk Profile, the level, the scope boundaries, and the timeline before any engagement fee is incurred.
If you are at the start of a DCC engagement, book a 15-minute Fig scoping call and we will walk through scope decisions before you commit. For level-by-level detail, see DCC Level 0 and DCC Level 1. For timeline context, see "How long does Defence Cyber Certification take?". For the broader DCC overview, the DCC hub is the right starting point.
About the author

Jay Hopkins
Managing Director, Fig Group
Jay Hopkins is the Managing Director of Fig Group and an IASME-licensed Cyber Essentials assessor. He was previously Head of Technology for a global regulated firm. He works with UK organisations across regulated sectors on baseline compliance, supply-chain assurance, and AI-augmented security tooling.
Next step
Want to see how Fig handles this?
Explore how Fig automates compliance mapping, evidence collection, and framework alignment across 65+ standards.
Request a demo