Data provenance

Data provenance (also referred to as “data lineage”) is metadata paired with records that details the origin of the data, the changes made to it, and the details supporting its confidence or validity. Data provenance is important for tracking down errors within data and attributing them to their sources. Additionally, data provenance can be useful in reporting and auditing for business and research processes.

Put most simply, the provenance of data helps to answer questions like “why,” “how,” “where,” “when,” and “by whom” data was produced. An example model of common provenance types can be seen below.

The W3C Provenance Incubator Group defines data provenance as

“a record that describes entities and processes involved in producing and delivering or otherwise influencing that resource. Provenance provides a critical foundation for assessing authenticity, enabling trust, and allowing reproducibility. Provenance assertions are a form of contextual metadata and can themselves become important records with their own provenance.”

[Image: Examples of fields supporting data provenance highlighted]

The primary metadata points related to data provenance include:

  • Data origin
  • How data is transformed or augmented
  • Where data moves over time
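The three metadata points above can be sketched as a simple record structure. This is a minimal, illustrative sketch only; the field names (`origin`, `transformations`, `movements`) are assumptions for the example, not a standard provenance schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    """Illustrative provenance metadata for one data record (field names are assumptions)."""
    origin: str                                           # where the data originally came from
    transformations: list = field(default_factory=list)   # how the data was transformed or augmented
    movements: list = field(default_factory=list)         # where the data moved over time

    def log_transformation(self, description: str) -> None:
        # Append a timestamped note describing a change made to the data.
        self.transformations.append(
            (datetime.now(timezone.utc).isoformat(), description)
        )

record = ProvenanceRecord(origin="https://example.com/source-page")
record.log_transformation("normalized date fields to ISO 8601")
record.movements.append("staging-db -> analytics-warehouse")
```

Keeping origin, transformation, and movement history alongside each record is what later makes validation, debugging, and auditing possible.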

The primary uses of data provenance include: 

  • Validation of data
  • Debugging of faulty data
  • Regeneration of lost or corrupted data
  • Analysis of data dependencies
  • Auditing and compliance rationales

How important data provenance is to an organization, and the extent to which it is implemented, are typically influenced by:

  • An enterprise’s data management strategy
  • Regulatory and legal requirements
  • Reporting requirements
  • Confidence requirements for critical segments of organizational data
  • Data impact analysis

The primary motivations for producing and preserving data provenance include:

  • Building credibility and trust in data, analysis, and reporting
  • Protecting reproducibility in reporting and analysis

Unstructured data and data provenance

An estimated 80-90% of organizational data is unstructured. Additionally, the web is, by and large, almost entirely unstructured. Taken together, this means that unstructured data figures into at least some portion of nearly all of most organizations’ data-centered activities. Sources of unstructured data useful to organizations are also growing much faster (both as a share of utilized data and in total) than structured and curated data silos. Big data in particular is dominated by unstructured sources, with an estimated 90% of big data residing in this form.

We should note that just because data is unstructured doesn’t mean it’s entirely chaotic. Rather, data is considered unstructured primarily because it (a) doesn’t reside in a database, or (b) doesn’t fit neatly into a database structure.

Two of the main issues organizations face when dealing with unstructured data include: 

  • Being able to parse and understand exactly what the data is saying
  • Being able to source the validity or origin of given data points (data provenance)

Diffbot’s Automatic Extraction APIs, Knowledge Graph, or Enhance are all potential solutions for these issues with unstructured data. 

Data Provenance and Diffbot

In a sense, data provenance records are “facts about facts.” In Diffbot’s Knowledge Graph™, confidence scores are calculated for every fact as sources for data are compared and integrated into records.

What is a confidence score in Diffbot’s Knowledge Graph?

Each fact has a confidence score that represents our level of belief in whether the fact is correct. A confidence score of one means that we’re absolutely certain the fact is correct, while a confidence score of zero means we’re absolutely certain the fact is incorrect. We currently discard facts whose confidence score is below 0.5.
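A rough sketch of that filtering policy follows. The 0.5 cutoff comes from the text above, but the data layout is an assumption; Diffbot’s internal fact representation is not public.

```python
# Hypothetical facts with confidence scores in [0, 1]. This layout is an
# assumption for illustration, not Diffbot's actual schema.
facts = [
    {"fact": "Albert Einstein won the Nobel Prize", "confidence": 0.98},
    {"fact": "Albert Einstein was born in Ulm", "confidence": 0.91},
    {"fact": "Albert Einstein invented the telephone", "confidence": 0.02},
]

# Discard facts whose confidence score falls below the 0.5 threshold.
kept = [f for f in facts if f["confidence"] >= 0.5]
```

Only the two high-confidence facts survive the filter; the low-confidence one is dropped.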

What is a “fact” within Diffbot’s Knowledge Graph?

Within Diffbot’s Knowledge Graph, facts are sourced data points about a given entity. As entities within the Knowledge Graph are contextually linked, a fact may also pertain to multiple entities. An example is the statement “Albert Einstein won the Nobel Prize.” This fact could be included within the Albert Einstein entity as one of his “awards won.” Simultaneously, the Nobel Prize entity could include Albert Einstein within a field for “recipients of.”
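The Einstein example can be sketched as a single fact referenced from both contextually linked entities. The dictionary layout below is purely illustrative, not Diffbot’s actual schema.

```python
# One fact shared by two contextually linked entities (illustrative only).
fact = {"subject": "Albert Einstein", "predicate": "won", "object": "Nobel Prize"}

entities = {
    "Albert Einstein": {"awards_won": []},
    "Nobel Prize": {"recipients": []},
}

# The same fact populates a field on each entity it pertains to.
entities["Albert Einstein"]["awards_won"].append(fact)
entities["Nobel Prize"]["recipients"].append(fact)

# Both fields reference the very same fact object, so an update to the
# fact (or its provenance) is visible from either entity.
assert entities["Albert Einstein"]["awards_won"][0] is entities["Nobel Prize"]["recipients"][0]
```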
On average, each entity within the Knowledge Graph has 22-25 facts, and each fact has an average of three origins or sources. The culmination of these validated facts of traceable origin is the single largest collection of web data that provides for data provenance.

See also: transparency and explainability in AI