Fig Technical Guides

The 14-Day Patching Rule: What It Actually Says and How to Stay Compliant

Jay Hopkins
Last reviewed: 18 April 2026
14 min read


When I review a failed Cyber Essentials submission, patching is nearly always in the top three reasons. Applicants read the 14-day rule, assume they are comfortably inside it because "Windows Update is on", and then discover during feedback that the rule covers a lot more than Windows, and the clock starts earlier than most people expect.

This guide is what I wish every applicant had read the day before they submitted. It explains what the NCSC actually requires under the current Cyber Essentials v3.3 question set, answers the questions that come up most often in the IASME portal clarifications, and shows you the evidence posture that passes an assessor review on the first pass.

What the 14-day rule actually says

The NCSC requirement, paraphrased in the language the questionnaire uses, is this: security updates classified by the vendor as "critical" or "high risk" — or, where no vendor classification exists, updates that address a vulnerability with a CVSS v3 base score of 7.0 or higher — must be applied within 14 days of release for every in-scope device.

Three things about that sentence catch people out.

First, it covers every in-scope device. Not just servers. Not just workstations. The scope includes laptops used by remote workers, office-based desktops, BYOD phones that access organisational email, routers and firewalls, network switches, printers that handle corporate documents, and any IoT device that touches the corporate network. If it is in your declared scope, it is inside the 14-day rule.

Second, the rule covers all software on those devices. Operating systems, applications, browsers, browser extensions, PDF readers, office productivity suites, runtimes like .NET and Java, and firmware. The question set specifically asks about third-party applications because assessors see repeated failures here.

Third, the rule covers critical and high severity only. Medium and low severity updates are not inside the 14-day window for Cyber Essentials purposes — although leaving them unpatched indefinitely creates other problems. The question set is narrowly interested in the high-risk category because that is where the exploitable exposure lives.

Does the 14-day timer start when the patch is released, or when my scanner finds it?

The clock starts on the date the vendor publishes the patch, not the date your vulnerability scanner flags it.

This is the single most common misunderstanding I see during feedback rounds. An applicant writes in the questionnaire that their scanner runs weekly and they patch "within 14 days of detection". The assessor then works back from the patch release date and the submission fails because the applicant gave themselves a second 14-day window on top of the one they were meant to be inside.

If a vendor ships a critical patch on the 1st of the month, your organisation needs the patch applied by the 15th — regardless of when you scanned, when your change control committee met, or whether the person who owns the patch cycle was on leave that week. The 14-day window is from release, full stop.

That has two practical implications. You need patch monitoring that tells you when vendors release, not just when scanners find. And you need a patch cadence quick enough to turn around critical updates within two weeks of vendor release, consistently, across every device class in your scope.
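The calendar maths is simple but worth making explicit. This is a minimal sketch of the deadline calculation as the scheme counts it; the function names are my own, not part of any scheme tooling:

```python
from datetime import date, timedelta

PATCH_WINDOW_DAYS = 14  # calendar days, counted from vendor release

def patch_deadline(vendor_release: date) -> date:
    """The deadline runs from the date the vendor publishes the patch,
    not from the date a scanner first flags it."""
    return vendor_release + timedelta(days=PATCH_WINDOW_DAYS)

def is_compliant(vendor_release: date, applied_on: date) -> bool:
    """True if the patch was applied on or before the 14-day deadline."""
    return applied_on <= patch_deadline(vendor_release)

# A critical patch released on 1 June must be applied by 15 June.
print(patch_deadline(date(2026, 6, 1)))                   # 2026-06-15
print(is_compliant(date(2026, 6, 1), date(2026, 6, 20)))  # False
```

Note there is no working-day adjustment anywhere in that calculation: weekends and bank holidays count.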

Which updates count toward the 14-day rule?

Not every update is inside the window. The question set asks specifically about critical and high-risk security updates. Here is how that maps to what vendors actually publish:

| Vendor classification | Inside 14-day rule? |
| --- | --- |
| Microsoft "Critical" | Yes |
| Microsoft "Important" | Often yes — check the CVE CVSS score; 7.0+ means yes |
| Chrome "Critical" or "High" security updates | Yes |
| Ubuntu "High" or "Critical" in security advisories | Yes |
| Vendor does not classify but CVE has CVSS 7.0+ | Yes |
| Vendor "Medium" or "Low" | Not for CE purposes |
| Feature updates with no security bulletin | No |
| Windows cumulative updates containing any critical CVE | Yes — treat the whole cumulative as inside the rule |

When in doubt, use the CVSS v3 base score as the source of truth. NIST publishes these in the NVD. If any CVE addressed by the patch is 7.0 or higher, the whole patch is inside the 14-day rule.

How do I handle third-party apps like Chrome, Adobe, 7-Zip, and Zoom?

This is the second most common failure. Organisations have Windows Update running and assume that covers everything. It does not. Chrome, Firefox, Edge, Zoom, Teams, Slack, Adobe Reader, 7-Zip, Notepad++, TeamViewer, WinSCP, PuTTY — every one of them ships independent security updates, and every one of them is inside scope if it is installed on an in-scope device.

There are three workable positions an assessor will accept:

Position 1: Managed auto-update. The application auto-updates on launch, the organisation has not restricted this via group policy, and the applicant can demonstrate — with a screenshot of About dialogs or a script dump of installed versions — that the deployed version on a sample of devices is within 14 days of the current stable release.

Position 2: Centralised patching. The organisation uses a dedicated patch management tool (Intune, SCCM, PDQ Deploy, Ninite Pro, Chocolatey for Business, Kandji, Jamf, etc.) that explicitly covers the third-party applications in question, with scheduled deployment rings and verified rollout reporting.

Position 3: Active removal. The organisation has an approved application list, third-party applications outside that list are blocked or removed, and the applications that remain have been specifically included in Positions 1 or 2.
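Whichever position you take, the evidence boils down to the same comparison: is each device's deployed version acceptable given how long the current stable release has been out? A minimal sketch, assuming you can export a device-to-version mapping from your inventory (the data shapes and names here are my own, not any tool's API):

```python
from datetime import date

def overdue_devices(deployed: dict[str, str], current_stable: str,
                    stable_released_on: date, today: date) -> list[str]:
    """Return devices still running an older version once the current
    stable release is more than 14 calendar days old."""
    if (today - stable_released_on).days <= 14:
        return []  # every device is still inside the window
    return [dev for dev, ver in deployed.items() if ver != current_stable]

fleet = {"LAPTOP-01": "124.0.1", "LAPTOP-02": "123.0.9",
         "LAPTOP-03": "124.0.1"}
print(overdue_devices(fleet, "124.0.1", date(2026, 4, 1), date(2026, 4, 18)))
# ['LAPTOP-02']
```

A real audit would compare per-version release dates rather than a single current release, but this is the shape of check an assessor wants to see you can run.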

What does not work is "we rely on users to keep their own apps updated". This is an automatic fail regardless of whether the users actually do it. The scheme requires the organisation to have a documented, enforceable approach, not an individual-responsibility handwave.

A pattern I see often: an organisation has Chrome set to auto-update but has disabled auto-update on Firefox "because of an old extension compatibility issue" and forgotten to remediate. During the assessment that comes up as an explicit control failure because the browser exposed to the public internet is now outside the 14-day window.

What if the vendor has not released a patch yet?

You are not in breach for as long as no patch exists: the 14-day clock starts at vendor release. If a vulnerability is public but unpatched, you remain compliant so long as you apply the patch within 14 days of the vendor eventually shipping it.

This does not mean you can ignore the unpatched vulnerability. The question set asks about your approach to "workaround controls" where no patch is available — for example, disabling a feature, blocking a port, or temporarily uninstalling an application. Assessors will usually accept a short-form policy that says: "Where no vendor patch is available for a critical vulnerability, we apply documented mitigations within 14 days of public disclosure and monitor vendor advisories daily." That satisfies the spirit of the rule without overcommitting you to something you cannot deliver.

What will fail is ignoring an unpatched vulnerability entirely. If an assessor finds evidence during the technical audit portion of Cyber Essentials Plus that an exploitable CVE has been public for weeks and no mitigation is in place, the certification will fail.

How do I meet the 14-day rule when users are on holiday?

This one is mostly a change management problem, not a technical one. The rule does not care where your users are. If a critical patch ships on 20 December and your organisation shuts down until 3 January, the patch still needs to be applied by 3 January — the 14-day window includes the break.

Practical patterns that work:

  • Remote deployment via MDM (Intune, Kandji, Jamf, etc.) so devices pick up patches when they next come online, regardless of whether the user is present
  • Forced restart schedules on power-on, so the device applies a pending patch the moment the user re-opens the laptop
  • A designated on-call patch responder over holiday periods (it does not have to be a senior engineer — someone who can trigger a pre-approved deployment and read the rollout report is enough)
  • Staggered shutdown policies so critical infrastructure (mail servers, VPN gateways, identity providers) has no scheduled downtime across holiday periods
The failure I see is organisations that interpret the 14-day rule as "14 working days". It is not. It is 14 calendar days, including weekends, bank holidays, and the break between Christmas and New Year.

Is it a fail if I cannot patch a legacy server?

It can be, but it does not have to be. There are three legitimate paths for a system that will not accept current patches.

Path 1: Remove it from scope. If the legacy system does not genuinely need to be inside your Cyber Essentials scope, take it out. The scope is whatever you declare it to be, provided you can defend the declaration. A legacy finance server that is only accessed via a specific jump host, has no direct connectivity to the rest of the corporate network, does not hold personal data, and does not sit on the path to any in-scope service can be legitimately excluded. But the exclusion needs to be real — network segmentation verified, no shared accounts, no trust relationship with in-scope systems.

Path 2: Isolate it as a sub-set. If you keep it in the overall organisation but need to exclude it from the assessed scope, document it as a sub-set exclusion in the questionnaire. The scheme allows this. What it does not allow is implying isolation that does not exist. "It is on a separate VLAN" is not isolation if the VLAN has a route to the rest of the network. "It is air-gapped" is not isolation if engineers regularly copy files to it via USB.

Path 3: Declare it as a known exception with compensating controls. If the system is in scope, cannot be patched, and cannot be removed, you can pass the questionnaire by applying compensating controls — restrictive firewall rules, no direct internet exposure, strict access control, monitored logging — and explicitly documenting it as a known exception. Assessors will probe this in feedback, and the posture has to be credible. What assessors reject is "we have compensating controls" as a sentence with nothing behind it.

The phrase to avoid in the questionnaire is "we cannot patch this but it is fine". It is never fine. Either it is out of scope, sub-set excluded, or compensated for with documented controls. Pick one, and back it up with evidence.

What evidence does an assessor actually want to see?

For a self-assessed Cyber Essentials submission, the assessor cannot demand evidence directly — but your answers need to be specific and internally consistent enough that an assessor can tell the posture is real. The answers that pass first time tend to include:

  • A named patch management tool — not "we use automation" but "Microsoft Intune with update rings configured as follows…"
  • A stated patch cadence — weekly or bi-weekly, with a named owner
  • A third-party application policy — either a named allow-list, a tool that handles third-party patching explicitly, or both
  • A firmware patching approach — routers, switches, and firewalls patched at least quarterly and as critical CVEs demand
  • A documented workaround process — for CVEs without patches
  • An exception register — for devices that cannot be patched

For Cyber Essentials Plus, the assessor will sample devices during the technical audit and verify observed patch levels against what you claimed in the questionnaire. Inconsistencies here are a fail. If your questionnaire says all laptops are within 14 days of the current Windows cumulative and the sample shows three laptops two cumulatives behind, that is the end of that audit.
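For the exception register in particular, the bar is "specific enough to audit". A minimal shape, purely illustrative (the class and field names are my own, not from the question set):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PatchException:
    """One entry in an exception register for an unpatchable device."""
    device: str
    reason: str
    compensating_controls: list[str]
    owner: str
    next_review: date

    def is_credible(self) -> bool:
        """An entry with no concrete compensating control is exactly the
        'we have compensating controls' sentence assessors reject."""
        return bool(self.compensating_controls)

entry = PatchException(
    device="FINANCE-SRV-01",
    reason="Vendor application unsupported on current OS",
    compensating_controls=["No internet route", "Access via jump host only"],
    owner="J. Hopkins",
    next_review=date(2026, 7, 1),
)
print(entry.is_credible())  # True
```

A spreadsheet with the same columns does the same job; the point is that every field is populated and owned.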

The cleanest 14-day rule posture for a UK SME

If you are a small or mid-sized UK organisation without a dedicated patch team, this is the posture I most often see pass cleanly:

1. Operating systems. Intune or Windows Update for Business on all Windows devices, Apple MDM (Jamf, Kandji, or Intune) on all Macs, Ubuntu Landscape or equivalent on Linux. Deployment rings set so critical patches hit all devices within 7 days of release, giving you a 7-day buffer before the 14-day limit.

2. Browsers. Chrome and Edge allowed to auto-update, extensions explicitly allow-listed in the management console. Firefox either allowed to auto-update or removed from the allow-list. IE11 removed entirely.

3. Third-party applications. A defined allow-list. Every application on the allow-list is either (a) auto-updating, (b) managed via a tool that covers third-party patching, or (c) documented as an exception with a named compensating control.

4. Firmware. Router, firewall, and switch firmware reviewed monthly against vendor advisories, with any critical advisory applied within 14 days.

5. Evidence. A single document that lists the above, references the specific tools, names the owner of each area, and includes the last three dates critical patches were deployed.

If you can produce that document, you will pass this section of Cyber Essentials on the first submission.

What to do if you are about to submit

Before you click submit, run five quick checks:

1. Pull a version audit on a sample of three devices. Are the OS, browser, and one key third-party application all within 14 days of the current stable release?

2. Check your firewall or router's firmware version against the vendor's current release. If it is behind by more than a quarter, patch it or flag it.

3. Confirm your allow-list genuinely reflects what is installed. Run a discovery scan if you have to. Submissions fail when the policy says Chrome only and half the devices have Firefox.

4. Identify any legacy system you meant to exclude and write the exclusion down formally before the assessor asks.

5. Draft your patch management paragraph in the questionnaire with the tool name, cadence, owner, and exception process. Generic answers get feedback; specific answers get passes.

If you want me to look at your posture before you submit, Fig Group's free readiness checker runs through this section explicitly.

Bottom line

The 14-day rule is strict, but it is not unreasonable. The failures I see are almost always specificity failures — applicants who know they patch but cannot describe the mechanism in a way an assessor can verify. Pick a tool, document the cadence, deal with third-party applications explicitly, and write down your legacy exceptions. Organisations that take that hour before submission pass this control area without further feedback.

Check your readiness | View pricing | Talk to an assessor

About the author

Jay Hopkins

Managing Director, Fig Group

IASME-licensed Cyber Essentials Assessor | IASME Cyber Assurance Assessor

Jay Hopkins is the Managing Director of Fig Group and an IASME-licensed Cyber Essentials assessor. He was previously Head of Technology for a global regulated firm. He works with UK organisations across regulated sectors on baseline compliance, supply-chain assurance, and AI-augmented security tooling.

Connect on LinkedIn
