
Data Discovery & Classification

Locate, classify, and monitor sensitive data across endpoints, cloud, and storage.

The challenge

Does this sound familiar?

Sensitive data sprawls untracked across endpoints, cloud storage, and databases. Classification is manual, inconsistent, and never complete. Regulatory obligations for personal data handling go unmet.

How Fig helps

Data Discovery & Classification with Fig

Content Scanning

Automated fingerprinting and pattern matching across endpoints, SharePoint, OneDrive, S3, and data warehouses. Classification rules map to GDPR, UK GDPR, and data protection frameworks.
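Fig's scanner internals are not public, but the pattern-matching side of content scanning can be illustrated with a minimal sketch. The patterns and labels below are assumptions for illustration only, not Fig's actual classification rules.

```python
import re

# Illustrative PII patterns (not Fig's rule set): UK National Insurance
# numbers and email addresses, each mapped to a classification label.
PATTERNS = {
    "uk_nino": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify(text: str) -> set[str]:
    """Return the set of classification labels matched in the text."""
    return {label for label, rx in PATTERNS.items() if rx.search(text)}

labels = classify("Contact jane@example.com, NINO AB123456C")
```

A production scanner layers fingerprinting and validation on top of raw patterns to cut false positives; this sketch shows only the matching step.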

Ownership Assignment

Every classified dataset records its business owner, data custodian, and retention rules with documented justification.
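The record described above (owner, custodian, retention, justification) might be shaped as follows. Field names and values are hypothetical, not Fig's schema.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical shape of a classified-dataset record; fields mirror the
# attributes described above but are illustrative, not Fig's data model.
@dataclass
class ClassifiedDataset:
    name: str
    classification: str          # e.g. "confidential"
    business_owner: str
    data_custodian: str
    retention_until: date
    justification: str

record = ClassifiedDataset(
    name="hr-payroll-share",
    classification="confidential",
    business_owner="head.of.people@example.org",
    data_custodian="it-ops@example.org",
    retention_until=date(2031, 4, 5),
    justification="Payroll records retained six years per HMRC guidance",
)
```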

Exposure Monitoring

Continuous tracking of where sensitive data is accessed, transmitted, and at risk. Alerts on unusual access patterns.
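"Unusual access pattern" detection can take many forms; a toy version, assuming nothing about Fig's actual detection logic, flags principals whose access volume far exceeds a simple baseline.

```python
from collections import Counter

# Toy anomaly check: flag principals whose access count exceeds a
# multiple of the mean. Real exposure monitoring uses richer signals
# (time of day, location, data volume); this only illustrates the idea.
def unusual_access(events: list[str], factor: float = 2.0) -> set[str]:
    counts = Counter(events)
    if not counts:
        return set()
    baseline = sum(counts.values()) / len(counts)  # mean accesses per principal
    return {user for user, n in counts.items() if n > factor * baseline}

alerts = unusual_access(["alice"] * 2 + ["bob"] * 2 + ["mallory"] * 20)
```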

Compliance Evidence

Audit-ready reports proving data location, classification accuracy, and control effectiveness for GDPR, DORA, and ISO 27001.

Core Capability

Fig includes a data classification scanner that identifies assets with access to sensitive or confidential information. It integrates with the Microsoft ecosystem, and all outputs feed directly into the Fig application.

Audit-ready workflow

How Data Discovery & Classification becomes evidence

Data Discovery & Classification should not be treated as a standalone tool surface. In Fig it is part of a governed workflow: a signal is captured, an owner is assigned, a control or risk is updated, and evidence is retained so the organisation can prove what happened later.

Lifecycle

Where it sits in the operating model

The Discover phase is where this capability sits in the wider Fig operating model. Fig turns the problem described above — untracked data sprawl, manual and inconsistent classification, unmet regulatory obligations — into a repeatable lifecycle, so MSPs, risk teams, and auditors are not relying on static spreadsheets or ad hoc screenshots when a buyer asks for proof.

Evidence captured

What auditors and buyers see

For data discovery & classification, useful evidence normally includes the triggering record, the affected asset or supplier, the control requirement, the assigned owner, the decision made, the timestamp, and the outcome. That evidence is mapped back to frameworks such as Cyber Essentials, ISO 27001, NIS2, DORA, GDPR, CMMC, and internal policy requirements where relevant.
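An evidence record covering the fields listed above might serialise like this. The keys, control reference, and values are assumptions for illustration, not Fig's export format.

```python
import json
from datetime import datetime, timezone

# Illustrative evidence record: triggering record, affected asset,
# control requirement, owner, decision, timestamp, and outcome,
# mapped back to frameworks. Field names are hypothetical.
evidence = {
    "trigger": "scan-2024-11-03-0042",
    "asset": "s3://finance-exports",
    "control": "ISO 27001 information classification control",
    "owner": "data.owner@example.org",
    "decision": "restrict-access",
    "timestamp": datetime(2024, 11, 3, 9, 15, tzinfo=timezone.utc).isoformat(),
    "outcome": "bucket policy tightened; re-scan clean",
    "frameworks": ["ISO 27001", "GDPR"],
}
print(json.dumps(evidence, indent=2))
```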

Implementation checks

Four steps to roll this out

  • 01. Define who owns data discovery & classification and what events should trigger review.
  • 02. Connect the relevant source systems so evidence is collected continuously.
  • 03. Map outputs to the frameworks and policies that matter to the organisation.
  • 04. Review exceptions, accepted risks, and overdue actions before audit or renewal.


The exact evidence required still depends on your scope, risk profile, sector, and framework obligations.

Built for you

Who uses this?

MSPs & MSSPs

Multi-client data classification standardised across workforces. White-label discovery and reporting for your MSP brand.


Security & risk teams

Understand your actual data footprint and exposure. Direct integration with data governance and incident response teams.


Compliance & audit

Structured evidence of data location, classification consistency, and access controls for data protection audits.


Common questions

Frequently asked questions

How does Fig classify sensitive data?

Fig uses pattern matching, content fingerprinting, and customisable classification rules. You can add industry-specific or organisation-specific classifiers and refine them based on accuracy feedback.

Does this work with unstructured data?

Yes. Fig scans unstructured content such as documents, emails, and files in cloud storage, alongside structured sources such as databases.

How often does data discovery run?

Cloud storage scanning runs continuously. Endpoint scanning can be scheduled to minimise performance impact. New data locations are flagged automatically as they appear across your environment.

Will this slow down our endpoints or file servers?

Endpoint scans are lightweight and scheduled outside business hours by default. You control scan windows and resource limits. Cloud storage scanning runs server-side, with no impact on endpoint performance.

How does Fig help with GDPR data subject access requests?

Fig can locate all instances of a specific individual's data across your scanned environments. This gives your DPO a clear map of where personal data sits, which significantly speeds up response times for subject access and deletion requests.
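Locating one individual's data across scan results reduces, conceptually, to a search over an index of scanned locations. The index shape and function below are hypothetical sketches of the idea, not Fig's API.

```python
# Sketch of locating a data subject's records across scan results,
# assuming a scan index of {location: extracted_text}. Names illustrative.
def locate_subject(index: dict[str, str], identifier: str) -> list[str]:
    """Return every scanned location whose content mentions the identifier."""
    needle = identifier.lower()
    return sorted(loc for loc, text in index.items() if needle in text.lower())

index = {
    "sharepoint://hr/contracts.docx": "Employment contract for Jane Doe",
    "s3://backups/2023.csv": "jane.doe@example.com,2023-01-14,renewal",
    "onedrive://sales/pipeline.xlsx": "Acme renewal Q3",
}
hits = locate_subject(index, "jane")
```

A real DSAR workflow would match on multiple identifiers (name, email, account IDs) and exclude coincidental hits; this shows only the location step.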

Next step

See Data Discovery & Classification in action.

Book a walkthrough tailored to your frameworks and tooling.