AI & Security

AI-Powered Compliance: How Codex, Claude, and Copilot Are Transforming Security Operations

Fig Group Editorial
11 min read


The arrival of large language models (LLMs) in the enterprise - from OpenAI's GPT models powering Codex and ChatGPT, to Anthropic's Claude - is forcing a fundamental rethink of how security teams and compliance professionals work. These aren't replacements for your SIEM or compliance platform. Instead, they're augmenting human capability in ways that were impossible five years ago.

This article explores how AI coding assistants and LLMs are transforming compliance operations, compares the key tools, and explains where Fig fits in an AI-powered security stack.

The AI Revolution in Compliance: What Changed?

Historically, compliance operations were characterised by manual labour:

  • Auditors manually reviewed logs and evidence against frameworks
  • Developers wrote repetitive scanning and reporting code
  • Risk managers consolidated findings from multiple tools into spreadsheets
  • Policy teams maintained documentation in static Word documents

This manual work was necessary because nobody had built sophisticated platforms to automate it. But it was also slow, error-prone, and expensive.

    AI changes this equation by automating the judgment aspect of compliance work, not just the mechanical aspects.

    Three Core Ways AI Augments Compliance

    1. Natural Language Processing for Automated Evidence Interpretation

    LLMs can read unstructured logs, configuration files, and security reports, extract relevant evidence, and map it to compliance frameworks without human translation.

    Example: A firewall logs millions of events daily. A traditional compliance auditor would sample logs manually. An LLM-powered system can read the entire log, identify security-relevant events, extract evidence of firewall rules, and automatically map that evidence to the relevant NIS2 requirement on protecting against denial-of-service attacks.
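In practice, this pattern needs a deterministic pre-filter so the LLM only sees security-relevant events rather than the full log stream. Here is a minimal sketch of that filtering step; the syslog-style log format and the relevance rules are illustrative assumptions, not a real vendor schema:

```python
# Hypothetical firewall log lines (format is an assumption for illustration).
LOGS = [
    "2026-01-10T09:14:02Z DENY tcp 203.0.113.7:51234 -> 10.0.0.5:22",
    "2026-01-10T09:14:02Z ALLOW tcp 10.0.0.8:443 -> 10.0.0.5:8080",
    "2026-01-10T09:14:03Z DENY udp 198.51.100.9:9999 -> 10.0.0.5:53 rate-limit",
]

def security_relevant(line: str) -> bool:
    """Keep only events worth passing to an LLM for compliance mapping."""
    return "DENY" in line or "rate-limit" in line

relevant = [line for line in LOGS if security_relevant(line)]
# `relevant` becomes the prompt context for the LLM mapping step; the full
# log stream never needs to leave your environment.
print(len(relevant), "of", len(LOGS), "events kept for LLM review")
```

The design point: cheap, deterministic code handles volume; the LLM handles interpretation of what survives the filter.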

    2. Code Generation for Compliance Automation

    Rather than hiring developers to write custom scanning and reporting scripts, security teams can use AI coding assistants (Codex, Copilot, Claude) to generate code that:

  • Queries APIs from security tools
  • Transforms data between formats
  • Generates compliance reports
  • Monitors for compliance drift

    Example: A risk manager wants to correlate vulnerability scan results with asset management data to identify "unpatched critical assets." Rather than waiting weeks for a developer to write a bespoke integration, they describe the requirement to Claude, which generates a Python script that connects to Tenable's API, queries your CMDB, correlates the datasets, and outputs a report. The script is generated in minutes, not weeks.
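The core of the script an assistant would generate is a simple join between the two datasets. A minimal sketch, using in-memory sample data in place of the Tenable API and CMDB calls (field names are illustrative assumptions):

```python
# Sample data standing in for live API responses.
vulns = [
    {"asset_id": "srv-01", "cve": "CVE-2025-0001", "severity": "critical"},
    {"asset_id": "srv-02", "cve": "CVE-2025-0002", "severity": "low"},
    {"asset_id": "srv-03", "cve": "CVE-2025-0003", "severity": "critical"},
]
cmdb = {
    "srv-01": {"owner": "payments", "patched": False},
    "srv-02": {"owner": "intranet", "patched": False},
    "srv-03": {"owner": "hr", "patched": True},
}

def unpatched_critical(vulns, cmdb):
    """Join scan results to asset records; keep critical vulns on unpatched hosts."""
    return [
        {**v, **cmdb[v["asset_id"]]}
        for v in vulns
        if v["severity"] == "critical" and not cmdb[v["asset_id"]]["patched"]
    ]

report = unpatched_critical(vulns, cmdb)
print([r["asset_id"] for r in report])  # ['srv-01']
```

A real version would page through both APIs and write the report to a file, but the correlation logic stays this small.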

    3. Intelligent Anomaly Detection and Risk Analysis

    LLMs can be fine-tuned to identify patterns in compliance data that humans miss - not because humans are bad at analysis, but because patterns emerge across thousands of data points.

    Example: An LLM trained on your organisation's access logs can identify unusual access patterns (user accessing systems outside their normal location at unusual times, privilege escalation attempts, rapid sequential file access) and flag them for investigation before they become incidents.
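A rule-based baseline makes the idea concrete before any model is involved. This sketch flags access events that fall outside a per-user profile; the event fields and the hard-coded baseline are illustrative assumptions (a trained model would learn the profile rather than have it written by hand):

```python
from datetime import datetime

# Illustrative access records; in practice these come from your SIEM.
events = [
    {"user": "alice", "ts": "2026-01-10T10:05:00", "country": "GB"},
    {"user": "alice", "ts": "2026-01-10T03:12:00", "country": "RO"},
]

# Hand-written baseline per user (an assumption; a model would learn this).
baseline = {"alice": {"countries": {"GB"}, "hours": range(7, 20)}}

def flag(event):
    """Return the reasons an event deviates from the user's baseline."""
    prof = baseline[event["user"]]
    hour = datetime.fromisoformat(event["ts"]).hour
    reasons = []
    if event["country"] not in prof["countries"]:
        reasons.append("unusual location")
    if hour not in prof["hours"]:
        reasons.append("unusual time")
    return reasons

flags = {e["ts"]: flag(e) for e in events}
print(flags["2026-01-10T03:12:00"])  # ['unusual location', 'unusual time']
```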

    The Key Players: Codex, Claude, and Copilot

    Let's compare the three most relevant AI tools for compliance operations:

    GitHub Copilot (Built on Codex)

    What it is: An AI coding assistant originally built on OpenAI's Codex model and now powered by GPT-4-class models, trained on public code repositories to suggest code completions.

    Where it excels:

  • Speed: Instant code suggestions as you type
  • Integration: Direct integration with IDEs (Visual Studio Code, JetBrains IDEs, Visual Studio)
  • Cost: $10-$20 per developer per month (enterprise pricing available)
  • Language support: Works with 40+ programming languages

    Limitations:

  • Trained on public code, so lacks context about your specific tools and systems
  • Suggestions are often boilerplate or generic
  • Less suitable for complex, domain-specific logic
  • Limited ability to understand your organisation's specific policies or standards

    For compliance: Copilot is excellent for writing repetitive integration code (API queries, data transformations) but less useful for writing compliance-specific logic or audit procedures.

    OpenAI Codex (Direct Access)

    What it is: OpenAI's code generation model available through direct API access. Codex is the foundational model behind Copilot but accessible for custom applications.

    Where it excels:

  • Flexibility: Can be integrated into custom compliance applications
  • Control: You control the prompt engineering and fine-tuning
  • Sophistication: Can generate complex, multi-file codebases
  • Cost: Pay per token used (cheaper for high volume)

    Limitations:

  • Requires technical integration (API calls, prompt engineering)
  • No IDE integration natively
  • Requires skilled engineers to get good results
  • Less intuitive than conversational interfaces like ChatGPT

    For compliance: Codex is ideal if you're building custom compliance automation tools. Fig Group has integrated Codex into its own platform to automate control mapping and evidence generation.

    Claude (Anthropic)

    What it is: Anthropic's large language model, accessible through Claude.ai (web interface), API, or Claude for enterprise deployments. Claude is trained on diverse data including academic papers, books, code, and web content.

    Where it excels:

  • Nuance and context: Particularly strong at understanding complex, nuanced compliance requirements
  • Reasoning: Excels at explaining why a control maps to a requirement, not just that it does
  • Long context: Can process 100k-token inputs (roughly 75,000 words), making it ideal for reviewing entire policies or incident logs
  • Safety: Designed with constitutional AI methods to be more transparent about limitations
  • Conversation: Excellent for iterative refinement ("Here's our incident log, help me identify the compliance implications")

    Limitations:

  • Slower than Copilot for simple code suggestions
  • Less training data on extremely niche compliance frameworks
  • API costs higher than Codex for high-volume use

    For compliance: Claude excels at the interpretive, judgment-based aspects of compliance - mapping controls, analysing incidents, explaining requirements. It's less suitable for high-volume code generation.

    Practical Use Cases: How AI Augments Compliance Today

    Use Case 1: Automated Evidence Extraction

    Scenario: You're preparing for an ISO 27001 audit. The auditor wants evidence of your firewall rule set, access control policy, and incident logs from the past 12 months.

    Traditional approach:

  • Manually export firewall rules, convert to a readable format
  • Print access control policy from your wiki
  • Query logs and consolidate into a spreadsheet
  • Time: 40 hours

    AI-augmented approach:

  • Codex generates a script to query your firewall API, security tool APIs, and log repository, extracting relevant evidence
  • Claude reads the unstructured evidence and maps it to ISO 27001 control requirements
  • Generate an audit-ready evidence pack in two hours

    Tools used: Codex or Copilot for script generation; Claude for evidence interpretation.
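The evidence-pack step reduces to grouping collected items by the control they support. A minimal sketch; the evidence records and the mapping from source to ISO 27001 Annex A control are illustrative assumptions, not an official mapping:

```python
# Collected evidence items (illustrative).
evidence = [
    {"source": "firewall", "item": "ruleset-export.json"},
    {"source": "wiki", "item": "access-control-policy.pdf"},
    {"source": "siem", "item": "incident-log-2025.csv"},
]

# Assumed source-to-control mapping; a real pack would be reviewed by a human.
control_map = {
    "firewall": "A.8.20 Networks security",
    "wiki": "A.5.15 Access control",
    "siem": "A.5.24 Incident management planning",
}

pack = {}
for ev in evidence:
    pack.setdefault(control_map[ev["source"]], []).append(ev["item"])

for control, items in sorted(pack.items()):
    print(control, "->", items)
```

In the AI-augmented workflow, the LLM proposes the mapping and a human verifies it; the grouping itself is plain code.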

    Use Case 2: Incident Analysis and Compliance Mapping

    Scenario: Your organisation experiences a security incident - unauthorised access to a customer database. Your GRC team needs to determine whether this triggers regulatory notification obligations (GDPR, NIS2, Cyber Essentials impact assessment, etc.).

    Traditional approach:

  • Security team writes incident report
  • Compliance officer manually reviews each regulatory requirement
  • Legal team provides interpretation
  • Determine notification obligations
  • Time: 5-10 days

    AI-augmented approach:

  • Security team documents incident in detail (what, when, who affected, systems impacted)
  • Claude reads the incident report and assesses regulatory notification obligations under GDPR, NIS2, and industry-specific regulations
  • Claude flags ambiguous cases for human review
  • Complete regulatory obligation assessment in 2 hours

    Tools used: Claude for analysis and regulatory mapping.
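A deterministic pre-screen can run before (not instead of) LLM and legal review, so obvious triggers are never missed. A minimal sketch; the incident fields and thresholds are illustrative assumptions:

```python
# Structured incident summary (fields are assumptions for illustration).
incident = {
    "personal_data_affected": True,
    "records_exposed": 12000,
    "essential_service_disrupted": False,
}

def screen(incident):
    """Rule-based first pass; every hit still goes to human/legal review."""
    obligations = []
    if incident["personal_data_affected"]:
        obligations.append("GDPR Art. 33: assess 72-hour notification to supervisory authority")
    if incident["essential_service_disrupted"]:
        obligations.append("NIS2: assess early-warning incident notification")
    return obligations

print(screen(incident))
```

The LLM then handles the genuinely ambiguous cases the rules cannot decide.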

    Use Case 3: Policy and Procedure Generation

    Scenario: You need to document your incident response procedure to meet NIS2 requirements. Standard incident response procedures are 20+ pages.

    Traditional approach:

  • CISO writes outline
  • Security operations team provides detailed procedures
  • Legal reviews for regulatory alignment
  • Publish and train staff
  • Time: 4-6 weeks

    AI-augmented approach:

  • Provide Claude with your existing incident response logs, your security team's notes, and NIS2 incident response requirements
  • Claude generates a draft incident response procedure that incorporates your actual practices and regulatory requirements
  • Security team reviews and refines the draft
  • Publish within one week

    Tools used: Claude for procedure generation.

    Use Case 4: Continuous Compliance Monitoring

    Scenario: You want to monitor for compliance drift - changes that violate your compliance requirements (e.g., a user being added to the "Domain Admins" group without authorisation).

    Traditional approach:

  • Configure alerts in your SIEM for specific events
  • Alert fatigue - security team receives hundreds of alerts daily
  • Most alerts are irrelevant
  • Real compliance violations are missed
  • Time to detect and remediate: days to weeks

    AI-augmented approach:

  • Codex generates integration code to pull configuration changes from your directory, firewall, and systems
  • Claude fine-tuned on your compliance requirements identifies changes that represent actual compliance violations
  • High-signal alerts sent to compliance team for review
  • Automated remediation workflows for common violations
  • Time to detect: minutes; time to remediate: hours

    Tools used: Codex for integration; Claude fine-tuned for compliance-specific anomaly detection.
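The drift-detection core for the "Domain Admins" example is a diff between membership snapshots, checked against approved changes. A minimal sketch with illustrative data:

```python
# Group-membership snapshots pulled from the directory (illustrative).
yesterday = {"Domain Admins": {"alice", "bob"}}
today = {"Domain Admins": {"alice", "bob", "mallory"}}
approved_changes = set()  # no change tickets on file

def drift(before, after, approved):
    """Flag additions to monitored groups that lack an approved change."""
    findings = []
    for group in after:
        added = after[group] - before.get(group, set())
        for user in sorted(added - approved):
            findings.append(f"unauthorised addition: {user} -> {group}")
    return findings

print(drift(yesterday, today, approved_changes))
# ['unauthorised addition: mallory -> Domain Admins']
```

In the article's workflow, these high-signal findings are what the compliance team reviews, instead of raw SIEM alert volume.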

    The Limitations: Where AI Isn't Ready

    Despite impressive capabilities, LLMs have important limitations in compliance:

    1. Hallucination

    LLMs sometimes generate plausible-sounding but false information. In compliance, this is unacceptable - you can't have an audit report that contains made-up evidence.

    Mitigation: Use LLMs to augment, not replace, human judgment. Generate drafts that humans review, not final outputs that bypass human verification.

    2. Domain-Specific Knowledge Gaps

    LLMs are trained on general data. Hyper-specific compliance frameworks (CMMC 2.0, DORA, sector-specific standards) may not be well-represented in training data.

    Mitigation: Provide detailed context in prompts. Give Claude the exact regulatory text, not just the framework name.

    3. Inability to Access Real-Time Data

    LLMs have a knowledge cutoff. Regulations change; you need current data. An LLM trained on 2023 data won't know about 2026 regulatory updates.

    Mitigation: Integrate LLMs with APIs and databases that provide current data. Use them for logic and analysis, not for ground truth.

    4. Cost at Scale

    LLM API calls are cheap individually but expensive at scale. If you're processing terabytes of logs daily, LLM-powered analysis might be prohibitively expensive compared to traditional rules-based approaches.

    Mitigation: Use LLMs for judgment-based tasks, traditional automation for high-volume mechanical tasks.
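This mitigation amounts to a two-tier router: deterministic rules absorb the high-volume mechanical events, and only ambiguous items are queued for (paid) LLM analysis. A minimal sketch with illustrative event types:

```python
# Incoming events (types are illustrative).
events = [
    {"id": 1, "type": "login_success"},     # routine: rules only
    {"id": 2, "type": "login_failure"},     # routine: rules only
    {"id": 3, "type": "policy_exception"},  # judgment call: queue for LLM
]

# Event types that deterministic rules fully handle (an assumption).
ROUTINE = {"login_success", "login_failure"}

llm_queue = [e for e in events if e["type"] not in ROUTINE]
print(len(llm_queue), "of", len(events), "events need LLM analysis")
```

Token spend then scales with the judgment workload, not with total log volume.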

    Where Fig Fits in the AI-Powered Compliance Stack

    Fig Group is not replacing LLMs or competing with Codex, Claude, or Copilot. Instead, Fig is the platform layer that sits above these AI tools and makes their compliance capabilities useful at scale.

    Here's how:

    1. Data Foundation

    Fig connects to 300+ security tools, creating a unified data model of your compliance and security posture. This data becomes the input to AI systems.

    Without Fig, you'd need to write separate Codex scripts to integrate each tool. With Fig, you query one unified API.

    2. Framework Integration

    Fig maps your security controls to 65+ compliance frameworks. This mapping becomes the context that AI systems use for analysis.

    Without Fig, you'd need to provide Claude with the full text of every regulatory requirement. With Fig, you describe your requirement once, and Claude has pre-loaded context about how it maps to your frameworks.

    3. Evidence Management

    Fig automatically collects evidence from your security tools. This evidence becomes the source of truth for AI analysis.

    Without Fig, you'd need to export evidence from multiple tools and combine it. With Fig, you query unified evidence already deduplicated and structured.

    4. Compliance Automation

    Fig automates the remediation workflows that AI recommends. When Claude identifies a compliance violation, Fig can automatically trigger remediation (disable a user, apply a security group, apply a patch).

    Without Fig, Claude can recommend fixes, but humans still execute them manually.

    Practical Example: AI-Powered Compliance with Fig and Claude

    Here's a concrete example of how Fig and Claude work together:

    Scenario: You're preparing for a CMMC 2.0 assessment. You want to verify that all privileged accounts have MFA enabled (a mandatory CMMC 2.0 control).

    Workflow:

    1. Data Collection (Fig): Fig queries your identity management system, SIEM, and access logs to identify all privileged accounts and whether MFA is enabled for each.

    2. Analysis (Claude): Claude reviews the list of privileged accounts, analyses the MFA status, and generates a report that not only lists which accounts lack MFA but explains the compliance implication and recommends remediation steps based on your actual infrastructure.

    3. Remediation (Fig): Fig can automatically enable MFA for compliant accounts or trigger a workflow to disable non-compliant accounts.

    4. Evidence (Fig): Fig generates an audit-ready evidence report showing the MFA status of all privileged accounts as of the assessment date.

    5. Assessment: The assessor reviews the evidence and verifies compliance without needing to independently audit your identity system.
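Steps 1-3 of the workflow can be sketched in a few lines: split privileged accounts by MFA status, then emit remediation actions for the non-compliant ones. The account records and the action name are illustrative assumptions, not Fig's actual API:

```python
# Account inventory as the data-collection step might return it (illustrative).
accounts = [
    {"user": "admin-ops", "privileged": True, "mfa": True},
    {"user": "admin-legacy", "privileged": True, "mfa": False},
    {"user": "jdoe", "privileged": False, "mfa": False},
]

# Analysis: privileged accounts lacking MFA are the compliance gap.
non_compliant = [a["user"] for a in accounts if a["privileged"] and not a["mfa"]]

# Remediation: one action per gap, for the automation layer to execute.
actions = [{"action": "enforce_mfa", "user": u} for u in non_compliant]
print(non_compliant, actions)
```

The LLM's contribution in step 2 is the narrative around this output - explaining the CMMC implication and recommending next steps - not the arithmetic itself.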

    Key insight: Neither Fig nor Claude could do this alone. Fig provides the data and automation; Claude provides the judgment and explanation.

    Getting Started with AI-Powered Compliance

    If you're interested in augmenting your compliance operations with AI:

    1. Start small: Pick one compliance task (evidence collection, incident analysis, policy generation) and explore how Claude or Codex augments your approach.

    2. Build integration: Use Codex to generate scripts that integrate your security tools with Claude or your custom LLM applications.

    3. Create workflows: Build compliance workflows where AI augments human decisions (generate recommendations, humans approve and execute).

    4. Measure impact: Track time savings, error reduction, and assessment readiness improvements as AI becomes part of your compliance operations.

    5. Connect with Fig: If you're using multiple security tools, Fig's unified data model dramatically reduces the integration work required to feed data to AI systems.

    The Bottom Line

    AI isn't replacing compliance professionals. Instead, AI is augmenting compliance teams, enabling them to:

  • Process more data and identify patterns humans would miss
  • Generate documentation and evidence faster
  • Focus on judgment-based work (risk assessment, regulatory interpretation) rather than mechanical work (data collection, formatting)
  • Spend less time on audits and more time on security improvements

    The organisations best positioned to benefit are those that combine three elements:

    1. Strong data foundations (platforms like Fig that provide unified compliance data)

    2. Skilled professionals who understand compliance, not just AI

    3. Thoughtful integration of AI into workflows where it augments, not replaces, human judgment

    In 2026 and beyond, AI-powered compliance will be standard. The question isn't whether to adopt these tools, but how to adopt them responsibly.

    Want to see how Fig handles this?

    Discover how Fig integrates with AI tools like Claude and Codex to automate security operations and compliance analysis.

    Request a demo