
Inference Engines

Inference engines are a component of an artificial intelligence system that applies logical rules to a knowledge graph (or knowledge base) to surface new facts and relationships. Inference engines can reason by induction or by deduction. In recent years, inferring relationships between entities using machine learning, machine vision, and natural language processing has dramatically increased the scale and value of knowledge graphs and relational databases.

An example of deductive reasoning within an inference engine: San Francisco is in California; therefore, any entity located in San Francisco must also be located in California. An example of inductive reasoning within an inference engine: tech companies with more than 100 employees tend to have CTOs, so any company meeting this criterion without a CTO is likely missing a record.

Historically, inference engines first surfaced as components of expert systems: systems meant to emulate the problem-solving ability of a human expert within a given domain.

Many inference engines infer new knowledge using two reasoning methods: backward chaining and forward chaining.

  • Backward chaining begins with a list of hypotheses and works backwards to check whether the available data, once plugged into the rules, supports them. In short, backward chaining highlights what facts must be true to support a hypothesis. 
  • Forward chaining starts with available data and applies rules to infer new data. In short, forward chaining starts with known facts and uses them to create new facts. 

Both backward and forward chaining proceed according to the modus ponens form of deductive reasoning: if X implies Y, and X is true, then Y must be true.
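A single step of modus ponens can be sketched in a few lines of Python; the propositional symbols here are illustrative:

```python
# Modus ponens: from "X implies Y" and "X is true", conclude "Y is true".
rules = {"X": "Y"}   # X => Y (illustrative propositional symbols)
known = {"X"}        # X is known to be true

# Any known fact that is the premise of a rule yields the rule's conclusion.
derived = {rules[fact] for fact in known if fact in rules}
print(derived)
```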

An example of forward chaining starts from the pseudocode rule:

Rule1: Dog(x) => Mammal(x)

Which states that all dogs are mammals. The engine then takes every entity known to be a dog and creates the fact that each is a mammal (e.g., Airbud is a mammal, Australian Shepherds are mammals, and so forth).
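This forward-chaining step can be sketched in Python, representing facts as (predicate, entity) pairs; the entity names are taken from the text above:

```python
# Known facts as (predicate, entity) pairs.
facts = {("Dog", "Airbud"), ("Dog", "Australian Shepherd")}

# Rule1: Dog(x) => Mammal(x), encoded as (body predicate, head predicate).
rules = [("Dog", "Mammal")]

# Forward chaining: apply every rule to every matching fact.
new_facts = set()
for body, head in rules:
    for predicate, entity in facts:
        if predicate == body:
            new_facts.add((head, entity))

facts |= new_facts
print(("Mammal", "Airbud") in facts)  # True
```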

An example of backward chaining can be seen through a scenario in which the inference engine is aided by an interface for a human.

Assuming the same rule as above and the hypothesis to be checked, "A Corgi is a mammal," a backward chaining reasoning method could highlight the assertion needed to prove it. For instance, an interface could ask a human, "Is a Corgi a dog?" When answered yes, the hypothesis "a Corgi is a mammal" is validated. 
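This scenario can be sketched as a small backward chainer; the `ask_human` function is a stand-in for the interactive interface described above:

```python
rules = [("Dog", "Mammal")]  # Dog(x) => Mammal(x)
facts = set()                # no facts recorded yet

def ask_human(question):
    # Stand-in for an interactive interface; here the answer is hard-coded.
    print(question)
    return True

def prove(goal, entity):
    """Work backwards from the goal to the assertions needed to support it."""
    if (goal, entity) in facts:
        return True
    # Try any rule whose head matches the goal, then prove its body.
    for body, head in rules:
        if head == goal and prove(body, entity):
            facts.add((goal, entity))
            return True
    # No rule concludes the goal, so fall back to asking the human.
    if ask_human(f"Is a {entity} a {goal.lower()}?"):
        facts.add((goal, entity))
        return True
    return False

print(prove("Mammal", "Corgi"))  # True, after asking "Is a Corgi a dog?"
```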

In many contemporary AI applications, both backward and forward chaining are applied in what is referred to as opportunistic reasoning. As one may expect, opportunistic reasoning applies each method of inference when it is most opportune for expanding the knowledge base. 

At its simplest, an inference engine progresses through three stages: match rules, select rules, and execute rules. 

  • Match rules: the inference engine finds all rules triggered by the contents of the knowledge base. 
  • Select rules: the engine determines the order in which the matched rules should be applied (this will differ for forward or backward chaining, or by other machine learning inputs). 
  • Execute rules: the engine applies the selected rules to existing knowledge through forward or backward chaining. 

Once execute rules completes, match rules restarts, and the cycle repeats until there are no more opportunities for either forward or backward chaining deductions.
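The three-stage cycle above can be sketched as a minimal forward-chaining loop; the rules and fact strings are illustrative:

```python
facts = {"dog(airbud)"}
rules = [
    ("dog(airbud)", "mammal(airbud)"),    # Dog => Mammal
    ("mammal(airbud)", "animal(airbud)"), # Mammal => Animal
]

while True:
    # Match: find rules whose premise holds and whose conclusion is new.
    matched = [r for r in rules if r[0] in facts and r[1] not in facts]
    if not matched:
        break  # no more deductions possible
    # Select: a simple policy -- fire matched rules in listed order.
    premise, conclusion = matched[0]
    # Execute: assert the new fact, then re-match.
    facts.add(conclusion)

print(facts)  # dog, mammal, and animal facts for airbud
```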