How We Increased Our Lead Contact Rate by 46% with Diffbot Enhance

Hi! This is Jerome from Diffbot. You might’ve seen us around before. We’re known for our automatic extraction APIs, and our knowledge graph of the public web. Today, I’d like to introduce you to Diffbot Enhance, lead enrichment anywhere you need it.

Lead enrichment doesn’t get enough credit

When I first saw it in action, it looked like a gimmick: just fields populated in a CRM, sold on shockingly pricey annual contracts up-sold alongside Salesforce.

Like keeping your personal address book up to date. Helpful? Sure. Necessary? Not really.

Sales always insists it’s helpful though. I didn’t get it.

Fast forward a few years, and one day we noticed that 62% of our inbound leads never made it to a demo call. 62%! These are people who choose to ignore the self-start trial option, fill out a six-field form, pass a captcha, and click a button that literally says “Request a Demo.”

Screenshot of sign up modal on Diffbot's homepage


KnowledgeNet: A Benchmark for Knowledge Base Population

EMNLP 2019 paper, dataset, leaderboard, and code

Knowledge bases (also known as knowledge graphs or ontologies) are valuable resources for developing intelligent applications, including search, question answering, and recommendation systems. However, high-quality knowledge bases still mostly rely on structured data curated by humans. Such reliance on human curation is a major obstacle to the creation of comprehensive, always-up-to-date knowledge bases such as the Diffbot Knowledge Graph.

The problem of automatically augmenting a knowledge base with facts expressed in natural language is known as Knowledge Base Population (KBP). This problem has been extensively studied in the last couple of decades; however, progress has been slow in part because of the lack of benchmark datasets. 

 

Knowledge Base Population (KBP) is the problem of automatically augmenting a knowledge base with facts expressed in natural language.

 

KnowledgeNet is a benchmark dataset for populating Wikidata with facts expressed in natural language on the web. Facts are of the form (subject; property; object), where subject and object are linked to Wikidata. For instance, the dataset contains text expressing the fact (Gennaro Basile; RESIDENCE; Moravia), in the passage:

“Gennaro Basile was an Italian painter, born in Naples but active in the German-speaking countries. He settled at Brunn, in Moravia, and lived about 1756…”

KBP has been mainly evaluated via annual contests promoted by TAC. TAC evaluations are performed manually and are hard to reproduce for new systems. Unlike TAC, KnowledgeNet employs an automated and reproducible way to evaluate KBP systems at any time, rather than once a year. We hope a faster evaluation cycle will accelerate the rate of improvement for KBP.
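To give a sense of what an automated, reproducible evaluation looks like at the fact level, here is a minimal scoring sketch over (subject; property; object) triples. It assumes facts have already been normalized and linked; the real KnowledgeNet evaluation also handles span matching and entity linking, which this sketch ignores.

    # Minimal sketch of fact-level scoring, assuming gold and predicted facts are
    # already normalized (subject, property, object) triples. The actual
    # KnowledgeNet evaluation is more involved (span matching, entity linking).

    def score(gold: set, predicted: set) -> dict:
        true_positives = len(gold & predicted)
        precision = true_positives / len(predicted) if predicted else 0.0
        recall = true_positives / len(gold) if gold else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        return {"precision": precision, "recall": recall, "f1": f1}

    gold = {("Gennaro Basile", "RESIDENCE", "Moravia"),
            ("Gennaro Basile", "PLACE_OF_BIRTH", "Naples")}
    predicted = {("Gennaro Basile", "RESIDENCE", "Moravia")}
    print(score(gold, predicted))   # precision 1.0, recall 0.5, F1 ~0.67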

Please refer to our EMNLP 2019 paper for details on KnowledgeNet, but here are some takeaways:

  • State-of-the-art models (using BERT) are far from achieving human performance (0.504 vs 0.822).
  • The traditional pipeline approach for this problem is severely limited by error propagation.
  • KnowledgeNet enables the development of end-to-end systems, which are a promising solution for addressing error propagation.

Diffbot’s Approach to Knowledge Graph

Google introduced the term Knowledge Graph to the general public (“Things, not Strings”) when it added the information boxes that you see on the right-hand side of many searches. However, the benefits of storing information indexed around an entity and its properties and relationships are well known to computer scientists, and this has long been one of the central approaches to designing information systems.

When computer scientist Tim Berners-Lee originally designed the Web, he proposed a system that modeled information as uniquely identified entities (URIs) and their relationships. He described it this way in his 1999 book Weaving the Web:

I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web – the content, links, and transactions between people and computers. A “Semantic Web”, which makes this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The “intelligent agents” people have touted for ages will finally materialize.

You can trace this way of modeling data even further back, to the era of symbolic artificial intelligence (“good old-fashioned AI”) and to the relational model of data first described by Edgar Codd in 1970, the theory that forms the basis of relational database systems, the workhorse of information storage in the enterprise.

From “A Relational Model of Data for Large Shared Data Banks”, E.F. Codd, 1970

What is striking is that these ideas of representing information as a set of entities and their relations are not new at all; they are very old. There seems to be something natural and human about representing the world in this way. So the problem we are working on at Diffbot isn’t a new or hypothetical problem that we defined, but one of the age-old problems of computer science, found in every organization that tries to represent its information in a way that is useful and scalable. The work we are doing at Diffbot is creating a better solution to this age-old problem, in the context of a new world with increasingly large amounts of complex and heterogeneous data.

The well-known general knowledge graphs (i.e. those that are not verticalized) fall into a few categories: knowledge graphs maintained by search engine companies (Google, Bing, and Yahoo), community-maintained knowledge graphs such as Wikidata, and academic knowledge graphs such as WordNet and ConceptNet.

The Diffbot Knowledge Graph approach differs in three main ways: it is an automatically constructed knowledge graph (not based on human labor), it is sourced from crawling the entire public web and all its languages, and it is available for use.

The first point is that all other knowledge graphs involve a heavy amount of human curation: directly entering the facts about each entity, selecting which entities to include, and categorizing those entities. At Google, the Knowledge Graph is actually a data format for structured data that is standardized across various product teams (shopping, movies, recipes, events, sports), and hundreds of employees and even more contractors both enter and curate the categories of this data, combining these separate product domains into a seamless experience. The Yahoo and Bing knowledge graphs operate in a similar way.

A large portion of the information these consumer search knowledge graphs contain is imported directly from Wikipedia, another crowd-sourced community of humans that both enters and curates the categories of knowledge. Wikipedia’s sister project, Wikidata, has humans directly crowd-editing a knowledge graph. (You could argue that the entire web is also a community of humans editing knowledge. However, the entire web doesn’t operate as a single community with shared standards and a common namespace for entities and their concepts; otherwise, we’d have the Semantic Web today.)

Academic knowledge graphs such as ConceptNet, WordNet, and, earlier, Cyc are also manually constructed, although to a larger degree informed by linguistics, and often by people employed by a single organization rather than by volunteers on the Internet.

Diffbot’s approach to acquiring knowledge is different. Diffbot’s knowledge graph is built by a fully autonomous system. We build machine learning models that classify each page on the web by the kind of entity it describes, extract the facts about that entity from the page, and then link and fuse the facts from different pages to form a coherent knowledge graph. We rebuild the knowledge graph from this fully automatic pipeline every 4-5 days, without human supervision.
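Reduced to its skeleton, that build looks something like the sketch below. Every function here is a toy stand-in for what is really a large machine learning model, and the sample data is invented, so read it as a statement of the pipeline’s shape rather than of Diffbot’s internals.

    # Skeleton of the autonomous build described above: classify each crawled page,
    # extract facts from it, then link and fuse records into one graph.
    # Every function is a trivial placeholder, not Diffbot's actual code.

    def classify(page: dict) -> str:
        return page.get("type", "other")                 # e.g. "person", "product"

    def extract_facts(page: dict) -> dict:
        return {"name": page["name"], "entity_type": classify(page), **page.get("facts", {})}

    def link_records(records: list) -> list:
        groups = {}                                      # toy record linking: exact-name match
        for record in records:
            groups.setdefault(record["name"].lower(), []).append(record)
        return list(groups.values())

    def fuse(cluster: list) -> dict:
        merged = {}                                      # toy knowledge fusion: later record wins
        for record in cluster:
            merged.update(record)
        return merged

    pages = [{"type": "person", "name": "Ada Example", "facts": {"employer": "Acme"}},
             {"type": "person", "name": "Ada Example", "facts": {"title": "Engineer"}}]
    knowledge_graph = [fuse(c) for c in link_records([extract_facts(p) for p in pages])]
    print(knowledge_graph)   # one fused person entity carrying both facts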

The second differentiator is that Diffbot’s knowledge graph is sourced from crawling the entire web. Other knowledge graphs may have humans citing pages on the web, but the set of cited pages is a drop in the ocean compared to all pages on the web. Even Google’s regular search engine is not an index of the whole web; rather, it is a separate index for each language that appears on the web. If you speak an uncommon language, you are not searching a very big fraction of the web. When we analyze each page on the web, however, our multi-lingual NLP is able to classify and extract the page, building a unified Knowledge Graph for the whole web across all of its languages. The other two companies besides Diffbot that crawl the whole web (Google and Bing in the US) index all of the text on each page for their search rankings, but do not extract entities and relationships from every page. The consequence of our approach is that our knowledge graph is much larger, and it autonomously grows by 100M new entities each month; that rate is accelerating as new pages are added to the web and as we expand the hardware in our datacenter.

The combination of automatic extraction and web-scale crawling means that our knowledge graph is much more comprehensive than other knowledge graphs. In Google Search, you may notice that a knowledge graph panel activates when you search for Taylor Swift, Donald Trump, or Tiger Woods (entities that have a Wikipedia page), but a panel is unlikely to appear if you search for your co-workers, colleagues, customers, suppliers, family members, or friends. The former are the popular celebrities behind the most optimized queries on a consumer search engine; the latter are the entities that actually surround you on a day-to-day basis. We would argue that a knowledge graph that covers those real-life entities makes it much more useful for building applications that get real work done. After all, you’re not trying to sell your product to Taylor Swift, recruit Donald Trump, or book a meeting with Tiger Woods; those just aren’t entities that most people encounter and interact with daily.

Lastly, access. The major search engines do not give any meaningful access to their knowledge graphs, much to the frustration of academic researchers trying to improve information retrieval and AI systems. This is because the major search engines see their knowledge graphs as competitive features that aid the experiences of their ad-supported consumer products, and they do not want others to use the data to build systems that might threaten their business. In fact, Google ironically restricts crawling of its own properties, and the trend over time has been to remove functionality from its APIs. Academics have created their own knowledge graphs for research use, but these are toy KGs that are 10-100 MB in size and released only a few times per year. They make some limited research possible, but are too small and out-of-date to support most real-world applications.

In contrast, the Diffbot knowledge graph is available and open for business. Our business model is providing Knowledge-as-a-Service, so we are fully aligned with our customers’ success: our customers fund improvements to the quality of our knowledge graph, and that quality improves the efficiency of their knowledge workflows. We also provide free access to our KG to the academic research community, clearing away one of the main bottlenecks to academic research progress in this area. Researchers and PhD students should not feel compelled to join an industrial AI lab to access its data and hardware resources in order to make progress in the field of knowledge graphs and automatic information extraction; they should be able to fruitfully research these topics at their academic institutions. We benefit the most from any advancement to the field, since we are running the largest implementation of automatic information extraction at web scale.

We argue that a fully autonomous knowledge graph is the only way to build intelligent systems that successfully handle the world we live in: one that is large, complex, and changing.

The Diffbot Master Plan (Part One)

Our mission at Diffbot is to build the world’s first comprehensive map of human knowledge, which we call the Diffbot Knowledge Graph. We believe that the only approach that can scale and make use of all of human knowledge is an autonomous system that can read and understand all of the documents on the public web.

However, as a small startup, we couldn’t crawl the web on day one. Crawling the web is capital-intensive, and many a well-funded startup and large company has gone bust trying to do it. Many startups in the late 2000s raised large amounts of money with no more than an idea and a team, aiming to build a better Google, but they were never able to build technology that was 10X better before their resources ran out. Even Yahoo eventually got out of the web crawling business, effectively outsourcing its crawl to Bing, and Bing was spending upwards of $1B per quarter to maintain a fast-follower position.

As a bootstrapped startup starting out at this time, we didn’t have the resources to crawl the whole web nor were we willing to burn a large amount of investors’ money before proving the technology to ourselves.

So, we just decided to start developing the technology anyway, but without crawling the web.

We started perfecting the technology to automatically render and extract structured data from a single page, starting with article pages and moving on to all the major kinds of pages on the web. We launched this as a paid API on Hacker News for developers, which meant that the only way we would survive was if the technology provided something of value beyond what could be produced in-house or by off-the-shelf solutions. For many kinds of web applications, automatically extracting structure from arbitrary URLs works 10X better than manually creating scraping rules for each site and maintaining those rulesets. Diffbot quickly powered apps from AOL, Instapaper, Snapchat, DuckDuckGo, and Bing, which used Diffbot to turn their URLs into structured information about articles, products, images, and discussion entities.
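For a flavor of what that looked like from the developer’s side, here is a rough sketch of a single-URL extraction call. The endpoint path and response fields follow Diffbot’s public v3 Article API as I understand it, and the token and URL are placeholders, so treat the details as illustrative rather than authoritative.

    # Rough illustration of turning one URL into structured JSON.
    # Endpoint and response fields follow Diffbot's public v3 Article API as I
    # understand it; the token and URL below are placeholders.
    import requests

    DIFFBOT_TOKEN = "YOUR_TOKEN"

    resp = requests.get(
        "https://api.diffbot.com/v3/article",
        params={"token": DIFFBOT_TOKEN, "url": "https://example.com/some-news-story"},
        timeout=30,
    )
    resp.raise_for_status()
    article = resp.json()["objects"][0]       # first extracted object on the page
    print(article["title"], article.get("date"), len(article.get("text", "")))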

This niche market (the set of software developers that have a bunch of URLs to analyze) provided us a proving grounds for our technology and allowed us to build a profitable company around advancing the state-of-the-art in automated information extraction.

Our next big break came when we met Matt Wells, the founder of the Gigablast search engine, whom we hired as our VP of Search. Matt had competed against Google in the first search wars of the mid-2000s (remember when there were multiple search engines?) and had achieved a comparably sized web index, with real-time search, using a much smaller team and hardware infrastructure. His team had written over half a million lines of C++ code to work out the many edge cases required to crawl 99.999% of the web. Fortunately for us, this meant that we did not have to expend significant resources learning how to operate a production crawl of the web, and could focus on the task of making meaning out of the web pages.

We integrated the Gigablast technology into Diffbot, essentially adding a highly optimized web rendering engine and our automatic classification and extraction technology to Gigablast’s spidering, storage, search, and indexing technology. We productized this as Crawlbot, which allowed our customers to create their own custom crawls by providing a set of domains. Crawlbot worked as a cloud-managed search engine, crawling entire domains, feeding the URLs into our automatic analysis technology, and returning entire structured databases.

Crawlbot allowed us to grow the market beyond an individual developer tool to businesses interested in market intelligence, whether about products, news aggregation, online discussion, or their own properties. Rather than passing in individual URLs, Crawlbot enabled our customers to ask questions like “let me know about all price changes across Target, Macy’s, J.Crew, Gap, and 100 other retailers” or “let me build a news aggregator for my industry vertical”. We quickly attracted customers like Amazon, Walmart, Yandex, and major market news aggregators.
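Setting up a domain-level crawl of that kind looked roughly like the snippet below. The parameter names (name, seeds, apiUrl) follow Diffbot’s public Crawlbot v3 API as I recall it, and the token, job name, and seed domains are placeholders, so verify the exact form against the current documentation.

    # Rough sketch of creating a Crawlbot job that crawls whole domains and runs
    # every discovered URL through automatic analysis. Parameter names follow the
    # public Crawlbot v3 API as I recall it; token, job name, and seeds are placeholders.
    import requests

    DIFFBOT_TOKEN = "YOUR_TOKEN"

    resp = requests.post(
        "https://api.diffbot.com/v3/crawl",
        data={
            "token": DIFFBOT_TOKEN,
            "name": "retail-price-watch",                             # hypothetical job name
            "seeds": "https://www.target.com https://www.gap.com",    # domains to crawl
            "apiUrl": "https://api.diffbot.com/v3/analyze?mode=auto", # per-URL analysis
        },
        timeout=30,
    )
    print(resp.json())   # job status; the finished crawl is downloaded as a structured dataset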

In the course of offering both the Extraction APIs and Crawlbot, our machine learning algorithms analyzed over 1 billion URLs each month, and we used the fraction of a penny we earned on each of those calls to build a top-tier research team to improve the accuracy of our machine learning models. Another side effect of this business model was that after 50 months we had processed 50 billion URLs (and since our customers pay for our analysis of each URL, they are incentivized to send us the most useful URLs on the web to process).

50 billion URLs is pretty close to the size of a decent crawl of the web (at least the valuable part of the web), and so we had confidence at this point that our technology could scale, both in terms of computational efficiency, as well as accuracy of the machine learning, as required by our demanding business customers. Because we had paid revenue from leading tech companies, we were confident that our technology surpassed what could be built in-house by their engineers. We had achieved many firsts: doing a full rendering at web scale and running sophisticated multi-lingual natural language processing and computer vision algorithms at web scale. So at this point, we started our own crawl of the web to fill in the gaps.

Our goal was to allow you to query for structured entities found anywhere across the web, but we had to account for the fact that an entity (e.g. a person or a product) can appear on many pages on the web, and each appearance can contain a different set of information with varying degrees of freshness. We had to solve the problem of entity resolution, which we call record linking, and of resolving conflicts in the facts about an entity from different sources, which we call knowledge fusion. With these machine learning components in place, we were able to build a consistent universal knowledge graph, generated by an autonomous system from the whole web, another first. We launched the Knowledge Graph last year.
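To make the knowledge fusion step concrete, here is a toy illustration of resolving conflicting facts about one already-linked entity by preferring the freshest source. The data and the last-observation-wins rule are purely illustrative; a production system weighs far more signals than source date.

    # Toy knowledge fusion: the same (already record-linked) person appears on
    # three pages with conflicting facts, and for each attribute we keep the
    # value from the freshest source. Purely illustrative, not Diffbot's logic.
    from datetime import date

    records = [
        {"source_date": date(2018, 3, 1), "employer": "Acme Corp"},
        {"source_date": date(2019, 7, 9), "employer": "Globex"},
        {"source_date": date(2019, 1, 4), "title": "Engineer"},
    ]

    def fuse(records: list) -> dict:
        fused, kept_on = {}, {}              # attribute -> value, attribute -> date of kept value
        for record in records:
            when = record["source_date"]
            for key, value in record.items():
                if key == "source_date":
                    continue
                if key not in fused or when > kept_on[key]:
                    fused[key], kept_on[key] = value, when
        return fused

    print(fuse(records))   # {'employer': 'Globex', 'title': 'Engineer'}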

So in short, here is our roadmap so far:

  1. Build a service that analyzes URLs using machine learning and returns structured JSON
  2. Use the revenue and learnings from that to build a service on top of that to crawl entire domains and return structured databases
  3. Use the revenue and learnings from that to build a service that allows you to query the whole web like a database and return entities

While the Extraction APIs and Crawlbot served a very Silicon Valley-centric developer audience, due to their requirement of passing in individual URLs or domains, the Knowledge Graph serves the much larger market of information professionals with business questions. This includes business analysts, market researchers, salespeople, recruiters, and data scientists, a much larger segment of the population than software developers.

As long as a question can be formed as a precise statement (using the Diffbot query language), it can be answered by the Diffbot Knowledge Graph, and all of the entities and facts that match the query can be returned, no matter where they originally appeared on the web.

Knowledge workers rely on the quality of information for their day-to-day work. They demand ever-improving accuracy, freshness, comprehensiveness, and detail, metrics that are aligned with our mission to build a complete map of human knowledge. We have an opportunity to build the first power tool for searching the web for knowledge professionals, one that is not ad-supported freeware, but where our technical progress is aligned with the values of our customers.

Introducing the Diffbot Knowledge Graph

Meet the largest database of human knowledge ever created: Diffbot Knowledge Graph

Diffbot is pleased to announce the launch of a new product: Diffbot Knowledge Graph.

What is the Knowledge Graph?

Eight years ago, Diffbot revolutionized web data extraction with AI data extractors (AI:X). Now, Diffbot is set to disrupt how businesses interact with data from the web again with the all-new DKG (Diffbot Knowledge Graph).

“What we’ve built is the first Knowledge Graph that organizations can use to access the full breadth of information contained on the Web. Unlocking that data and giving organizations instant access to those deep connections completely changes knowledge-based work as we know it.”

– Mike Tung, founder and CEO of Diffbot.

Unlocking knowledge from the Web

Ever wished there was a search engine that gave you answers to your questions with data, rather than a list of links to URLs?

Using our trademark combination of machine learning and computer vision, the DKG is curated by AI and built for the enterprise, unlocking the entire Web as a source of searchable data. The DKG is a graph database of over 10 billion connected entities (people, companies, products, articles, and discussions) covering over 1 trillion facts.

In contrast to other solutions marketed as Knowledge Graphs, the DKG is:

  • Fully autonomous and curated using Artificial Intelligence, unlike other knowledge graphs which are only partially autonomous and largely curated through manual labor.
  • Built specifically to provide knowledge as the end product, paid for and owned by the customer. No other company makes this available to their customers, as other knowledge graphs have been built to support ad-based search engine business models.
  • Web-wide, regardless of originating language. Diffbot technology can extract, understand, and make searchable information in French, Chinese, or Cyrillic-script languages just as easily as in English.
  • Constantly rebuilt from scratch, which is critical to the business value of the DKG. This rebuilding process ensures that DKG data is fresh, accurate, and comprehensive.

Why?

A Web-wide, comprehensive, and interconnected knowledge graph has the power to transform how enterprises do business. In our vision of the future, human beings won’t spend time sifting through mountains of data trying to determine what’s true. AI is so much better at doing that.

Right now, 30 percent of a knowledge worker’s job is data gathering. There’s a big opportunity in the market for a horizontal knowledge graph: a database of information about people, businesses, and things. Other knowledge graphs are little more than restructured Wikipedia facts with the simplest, most narrow connections drawn between them. We knew we could do better. So we’re building the first comprehensive map of human knowledge by analyzing every page on the Internet.

Knowledge is needed for AI

The other reason we’re building the DKG is to enable the next generation of AI to understand the relationships between the entities in the world it represents. True AI needs the ability to make informed decisions based on deep understanding and knowledge of how entities and concepts are linked together.

We’ve already seen some fantastic research from universities and industry built on top of the DKG, including the particularly impressive creation of a state-of-the-art Q&A AI.

Evolution from Data to Knowledge

There is a subtle but pivotal difference between data and knowledge. While data helps many businesses, knowledge has the power to be transformative for any business.

Define “Data”:

Facts and statistics collected together for reference or analysis.

Define “Knowledge”:

Facts, information, and skills acquired through experience or education; the theoretical or practical understanding of a subject.

– Oxford Dictionary

The key to the DKG’s value is how it encompasses the whole Web, how it joins together all the data points from many sources into individual entities, and, importantly, how it then connects those entities together according to their relationships.

By building a practical contextual understanding of all data online, the DKG is able to answer complex questions like “How many people with the skill ‘Java’ who used to work at IBM in a junior role now work at Facebook as senior managers?” by providing you with a number and a list of the people who meet the criteria.

To access the DKG, Diffbot created a search query language called Diffbot Query Language (DQL). It’s flexible enough to let you perform granular searches to find the one exact piece of information you need out of the trillions, or to gather massive datasets for broad analysis. DQL has all the tools you need to access the world’s largest knowledge source with highly accurate, precise searches.
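As a rough illustration, a DQL query for the example question above would look something like the string below. The entity type, the field names (skills.name, employments.*), and the nested-clause syntax are my approximation of the documented query language, not a verified query, so check it against the DQL reference before running it.

    # Approximate DQL for the question above: people with the skill "Java" who once
    # held a junior role at IBM and are now senior managers at Facebook. Field names
    # and the nested-clause syntax are an approximation of the public DQL docs.
    query = (
        'type:Person '
        'skills.name:"Java" '
        'employments.{employer.name:"IBM" title:"Junior*" isCurrent:false} '
        'employments.{employer.name:"Facebook" title:"Senior Manager" isCurrent:true}'
    )
    print(query)   # submit via the Knowledge Graph search endpoint or the DQL UI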

Ready to Use Now

Now, any business that wants instant access to all of the world’s knowledge can simply sign up for the DKG and turn the entire Web into their personal database for business intelligence across:

  • People: skills, employment history, education, social profiles
  • Companies: rich profiles of companies and the workforce globally, from Fortune 500 to SMBs
  • Locations: mapping data, addresses, business types, zoning information
  • Articles: every news article, dateline, and byline from anywhere on the Web, in any language
  • Products: pricing, specifications, and reviews for every SKU across major ecommerce engines and individual retailers
  • Discussions: chats, social sharing, and conversations everywhere from article comments to web forums like Reddit
  • Images: billions of images on the web organized using image recognition and metadata collection

Want to learn more about the Diffbot Knowledge Graph?

