
Transparency and Explainability

Transparency and explainability are concepts used to rate and discuss how AI systems arrive at their conclusions. As reliance grows on AI and neural networks, which often operate as "black boxes," questions arise about the "why" and "how" of AI reasoning.
All entities pulled from the web and structured with Diffbot data extraction products provide metadata about the origin of each fact and a score of its projected validity. This metadata gives users a trace with which to "follow the logic" of our data extraction systems.
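As a rough illustration of how such a trace can be consumed, the sketch below inspects a fact record carrying origin and confidence metadata. The field names (`value`, `origin`, `confidence`) and the example data are hypothetical and do not reflect Diffbot's actual response schema.

```python
# Hypothetical fact record: field names and values are illustrative only,
# not Diffbot's actual schema.
fact = {
    "value": "Ada Lovelace",
    "origin": "https://example.com/article",  # source page the fact came from
    "confidence": 0.97,                       # scored projected validity
}

def explain(fact):
    """Return a human-readable provenance trace for an extracted fact."""
    return (f"'{fact['value']}' was extracted from {fact['origin']} "
            f"with confidence {fact['confidence']:.2f}")

print(explain(fact))
```

Exposing origin and score alongside each fact is what lets a downstream user audit, filter, or discard individual facts rather than trusting the extraction system wholesale.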
See also: data provenance.