AI-powered Cyber Essentials assessment: what Fig does differently
Fig runs an AI-augmented assessment pipeline that is part of how the 6-hour certification guarantee works. This is the inside view of what the AI does, what it does not do, and why the certificate is still human-signed.
Fig Group is the only IASME-licensed Cyber Essentials certification body in the UK publishing a 6-hour turnaround guarantee for compliant submissions. That SLA is not possible without an AI-augmented workflow - but the certificate is still signed by a human, IASME-licensed assessor.
6 hours
UK's only sub-day Cyber Essentials SLA
AI-augmented
Triage, gap detection, and feedback drafting
Human-signed
IASME-licensed assessor on every certificate
What the AI is doing
Fig's assessment pipeline uses a purpose-built AI model - trained in-house on the Cyber Essentials scheme requirements, hosted within Fig's infrastructure, and never sending customer submissions to third-party AI providers. Customer data stays inside the certification boundary at every step. The model identifies patterns in self-assessment evidence to do four things:
Submission triage
As soon as a self-assessment arrives, the AI reads every answer, classifies each question against the v3.3 scheme requirements, and flags the highest-risk inconsistencies. A typical 160-question CE submission has 3–8 answers that need human clarification. The AI surfaces them in the first minute; without AI triage, an assessor would spend 45–90 minutes doing the same scan.
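To make the shape of that triage output concrete, here is a minimal sketch in Python. The question model, the regex rules, and the risk weights are illustrative assumptions - Fig's production system is a trained model, not a regex list, but the artefact it hands the assessor (a short, ranked flag list) looks something like this:

```python
from dataclasses import dataclass
import re

@dataclass
class Answer:
    question_id: str   # hypothetical numbering, not the real questionnaire's
    text: str

@dataclass
class Rule:
    pattern: str       # regex over the answer text
    reason: str
    risk: int          # higher = the assessor should look at it sooner

# Illustrative known-risk phrasings that tend to need human clarification.
RULES = [
    Rule(r"\bplan(ned|ning)? to\b", "control described as future, not in place", 9),
    Rule(r"\b(some|most|a few) (users|staff|devices)\b", "qualifier implies partial coverage", 8),
    Rule(r"\bnot applicable\b", "N/A answer needs a scope justification", 5),
]

def triage(answers: list[Answer]) -> list[tuple[Answer, Rule]]:
    """Return every answer matching a known-risk pattern, riskiest first -
    typically the 3-8 items the assessor reviews before anything else."""
    flags = [(a, r) for a in answers for r in RULES
             if re.search(r.pattern, a.text, re.IGNORECASE)]
    return sorted(flags, key=lambda f: f[1].risk, reverse=True)
```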
Gap detection against v3.3
The AI checks every answer against the current scheme version and the MFA, patching, BYOD, and cloud-service rules added in v3.3. If an answer says "we enable MFA for most users" or "we patch within 30 days", the AI catches the non-compliance immediately and drafts a specific feedback paragraph the assessor can review.
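As a hedged sketch of what one of those checks could look like - the patterns and feedback strings here are assumptions for illustration, not Fig's production ruleset - both example answers quoted above are caught:

```python
import re

# Two illustrative v3.3 gap rules matching the example answers quoted above.
GAP_RULES = [
    (re.compile(r"\bMFA\b.*\bmost\b", re.IGNORECASE),
     "v3.3 requires MFA for all users of cloud services, not 'most'."),
    (re.compile(r"\bpatch\w*\b.*?\b(1[5-9]|[2-9]\d|\d{3,})\s*days\b", re.IGNORECASE),
     "High and critical updates must be applied within 14 days of release."),
]

def detect_gaps(answer_text: str) -> list[str]:
    """Return a draft feedback line for each rule the answer violates."""
    return [feedback for pattern, feedback in GAP_RULES
            if pattern.search(answer_text)]

print(detect_gaps("we enable MFA for most users"))
print(detect_gaps("we patch within 30 days"))
```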
Feedback generation
Rather than "re-submit with corrections", Fig's AI drafts a specific remediation paragraph per flagged item: what the answer said, why it fails v3.3, what the correct answer looks like, and how to verify the control before re-submitting. The human assessor reviews each draft before it goes to the customer. Customers see feedback in hours, not days.
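A minimal sketch of that four-part draft as a data structure. The field names are hypothetical, and the example values follow the patching case above:

```python
from dataclasses import dataclass

@dataclass
class FeedbackDraft:
    """The four-part remediation structure described above."""
    quoted_answer: str      # what the answer said
    why_it_fails: str       # which v3.3 requirement it breaks
    compliant_answer: str   # what the correct answer looks like
    verification: str       # how to verify the control before re-submitting

    def render(self) -> str:
        return (
            f'Your answer: "{self.quoted_answer}"\n'
            f"Why it fails v3.3: {self.why_it_fails}\n"
            f"A compliant answer: {self.compliant_answer}\n"
            f"Before re-submitting: {self.verification}"
        )

draft = FeedbackDraft(
    quoted_answer="we patch within 30 days",
    why_it_fails="High and critical updates must be applied within 14 days.",
    compliant_answer="High/critical patches are applied within 14 days of release.",
    verification="Check your patch-management tool's deployment report.",
)
print(draft.render())   # the human assessor reviews this before it is sent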
Cross-reference checking
The AI compares the submission against prior submissions from the same customer (for renewals) and against a catalogue of common gotchas - the 14-day patching rule applied to third-party applications, auto-run disabled on Windows, tamper protection enabled on Defender, home router passwords for remote workers. These are the same checks a seasoned IASME assessor would make from muscle memory; the AI does it in parallel across every submission in the queue.
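A sketch of how such a catalogue could be expressed - the topic keys and string tests are illustrative assumptions, with the submission modelled as a plain dict of topic to answer text:

```python
# Hypothetical gotcha catalogue as named predicate checks over a submission.
GOTCHAS = {
    "14-day patching covers third-party apps":
        lambda s: "third-party" in s.get("patching", "").lower(),
    "auto-run disabled on Windows":
        lambda s: "disabled" in s.get("autorun", "").lower(),
    "Defender tamper protection enabled":
        lambda s: "enabled" in s.get("tamper_protection", "").lower(),
    "home router passwords changed for remote workers":
        lambda s: "changed" in s.get("home_routers", "").lower(),
}

def cross_reference(current: dict, prior: dict | None = None) -> list[str]:
    """Flag failed gotchas, plus (for renewals) answers changed since last year."""
    issues = [name for name, check in GOTCHAS.items() if not check(current)]
    if prior is not None:
        issues += [f"answer changed since prior submission: {topic}"
                   for topic in current if current.get(topic) != prior.get(topic)]
    return issues
```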
What the AI is not doing
Not issuing the certificate
The Cyber Essentials certificate is signed by a human IASME-licensed assessor. The IASME scheme rules require human assessor accountability - an assessor's name is on the certificate, and they take regulatory responsibility for it. The AI produces analysis and drafts; the assessor reviews, challenges, and approves.
Not making edge-case judgements
Some submissions contain genuine ambiguity: a non-standard network topology, an unusual SaaS architecture, a scope boundary that is not cleanly in or out. The AI flags these to a human assessor rather than attempting to resolve them. The assessor makes the judgement call, documents the reasoning, and records the decision on the submission.
Not reading unsubmitted evidence
The AI does not have access to anything the customer has not explicitly submitted. If the assessor needs to see a screenshot of the admin console or a policy document, that is still a human-to-human email and a human review of the attachment.
Not auto-correcting customer submissions
If the AI detects a problem, it generates feedback - the customer fixes the submission, not the AI. The control narrative is the customer's, not Fig's.
Why this delivers the 6-hour SLA
The delay in traditional CE assessment is not the review itself. An experienced IASME assessor can review a typical CE submission in 30–45 minutes. The delay is queue time - submissions sit in a queue waiting for an assessor to pick them up.
AI triage lets Fig's assessors work on the highest-value items first. A submission arrives at 09:30, the AI has it triaged by 09:32, the flagged items are visible to the assessor immediately, and by 12:30 the assessor has either approved the certificate or returned specific feedback. That is three hours end to end - comfortably inside the 6-hour window on a UK business day.
For comparison, industry-average CE turnaround is 24–72 hours. That gap is queue time and administrative overhead - not assessment complexity.
The failure modes AI catches best
In the first 200 submissions processed through the AI-augmented pipeline, the most frequently caught failure modes were:
1. MFA stated as enabled, but scope description implied partial coverage. (47 submissions - most common failure.)
2. Patch window stated as "monthly" which violates the 14-day rule. (38 submissions.)
3. BYOD policy exists but no technical enforcement described. (29 submissions.)
4. Home router answer "not applicable" with remote workers in the scope. (22 submissions.)
5. Cloud services listed but explicit scope treatment missing. (18 submissions.)
Every one of these is a scheme-specific failure mode that a human assessor would also catch - the difference is the AI catches them in 30 seconds instead of 30 minutes, and drafts the feedback immediately.
Customer trust and the IASME rules
IASME's scheme rules require that assessments are performed by named licensed assessors. Fig operates entirely within these rules: the AI is a tool the assessor uses, not a replacement for the assessor. Every certificate carries an IASME-licensed assessor's name. Every contentious judgement is documented by a human. The AI speeds up the parts that can be safely sped up - triage, gap detection, feedback drafting - and leaves the judgement to humans.
IASME has been consulted on Fig's pipeline, which is compatible with the scheme's governance.
What comes next
Two specific extensions of the AI pipeline are in development for 2026:
1. Structured evidence validation. For CE Plus, the AI will validate device sampling evidence against scheme requirements in real time, so the assessor's audit time is spent on human judgement rather than checklist verification.
2. Renewal pre-population. For renewals, the AI will pre-populate the self-assessment with answers from the prior year, flag the questions that are likely to need change under v3.3, and surface the deltas for the customer to review. This turns a 160-question renewal into a 20-question delta review.
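A minimal sketch of that renewal delta - the question IDs and the set of change-prone questions are hypothetical, with last year's answers assumed to be available as a dict:

```python
# Hypothetical set of questions most likely to change year to year,
# e.g. device lists, cloud services, MFA scope.
LIKELY_TO_CHANGE = {"A2.4", "A5.1", "A7.2"}

def prepopulate(prior: dict[str, str]) -> tuple[dict[str, str], list[str]]:
    """Carry last year's answers forward and list the questions the
    customer should actively re-review under v3.3."""
    draft = dict(prior)   # pre-populated self-assessment
    deltas = sorted(qid for qid in draft if qid in LIKELY_TO_CHANGE)
    return draft, deltas

draft, deltas = prepopulate({"A2.4": "14 laptops, 9 phones", "A5.1": "auto-lock after 5 min"})
print(deltas)   # the short delta review instead of all ~160 questions
```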
Neither extension changes the core model: human assessor accountability, AI assistance, IASME scheme compliance.
Bottom line
The 6-hour certification guarantee is possible because AI removes queue time and administrative overhead from a human assessor's working day, not because AI replaces the assessor. The certificate is still signed by an IASME-licensed human. The customer gets a faster, more specific, more helpful assessment. The scheme's integrity is preserved.
Get Cyber Essentials certified in 6 hours | Read about the 6-hour guarantee | See pricing
About the author

Jay Hopkins
Managing Director, Fig Group
Jay Hopkins is the Managing Director of Fig Group and an IASME-licensed Cyber Essentials assessor. He was previously Head of Technology for a global regulated firm. He works with UK organisations across regulated sectors on baseline compliance, supply-chain assurance, and AI-augmented security tooling.
Next step
Want to see how Fig handles this?
Discover how Fig's built-in AI automates security operations and compliance analysis across your stack.
Request a demo