What We Found Analyzing 300 Yelp Reviews of a Michelin-Reviewed Restaurant with Natural Language Processing

Reviews are a veritable gold mine of data. They’re one of the few times when customers, unprompted, lay out the best and the worst parts of using a product or service. And the relative richness of natural language can point product or service providers in a nuanced direction more definitively than quantitative metrics like time on site, bounce rate, or sales numbers.

The flip side of this linguistic richness is that reviews are largely unstructured data. Beyond that, many reviews are written somewhat informally, making the task of decoding their meaning at scale even harder.

Restaurant reviews are known to be some of the richest of all reviews. They tend to document the entire experience: social interactions, location, décor, service, price, and food.

Continue reading

From Knowledge Graphs to Knowledge Workflows

2020 was undeniably the “Year of the Knowledge Graph.”

2020 was the year that Gartner put Knowledge Graphs at the peak of its hype cycle.

It was the year when 10% of the papers published at EMNLP referenced “knowledge” in their titles.

It was the year over 1000 engineers, enterprise users, and academics came together to talk about Knowledge Graphs at the 2nd Knowledge Graph Conference.

There are good reasons for this grass-roots trend: it isn’t any one company pushing it (ahem, I’m looking at you, Cognitive Computing), but rather a broad coalition of academics, industry vertical practitioners, and enterprise users who, in one way or another, build intelligent information systems.

Knowledge graphs represent the best of what we hope the “next step” of AI looks like: intelligent systems that aren’t black boxes, but are explainable, are grounded in the same real-world entities as we humans are, and can exchange knowledge with us through precise, common vocabularies. It’s no coincidence that in the same year the deep learning revolution took off (2012), Google introduced the Google Knowledge Graph as a way to bring interpretability to its otherwise opaque search ranking algorithms.

The Risk Of Hype: Touted Benefits Don’t Materialize

Continue reading

Robotic Process Automation Extraction Is A Time Saver. But It’s Not Built For the Future

Enough individuals have heard the siren song of Robotic Process Automation to build several $1B companies. Even if you don’t know the “household names” in the space, something about the buzzword abbreviated as “RPA” leaves the impression that you need it. That it boosts productivity. That it enables “smart” processes. 

RPA saves millions of work hours, for sure. But how solid is the foundation for processes built using RPA tech? 


First off, RPA operates by literally moving pixels across the screen. Repetitive tasks are automated by recording the “steps” someone would take to manipulate applications with their mouse, then replaying those steps without human oversight. There are plenty of situations in which this is handy. You need to move entries from a spreadsheet to a CRM. You need to move entries from a CRM to a CDP. You need to cut and paste thousands or millions of times between two windows in a browser.

These are legitimate issues within back end business workflows. And RPA remedies these issues. But what happens when your software is updated? Or you need to connect two new programs? Or your ecosystem of tools changes completely? Or you just want to use your data differently? 

This hints at the first issue with the foundation on which RPA is built. RPA can’t operate in environments it hasn’t seen (and received extensive documentation about).

Continue reading

How to Track Market Indicators Using News Monitoring Scheduling

The public web is chock full of indicators with implications for stock prices, commodities prices, supply chain issues, or the general perceived value of an entity. But how do you reliably get these market indicators?

You can search online… and slog through the most popular pages that all your competitors have also looked at. Or you can read a commentator’s take. And likely stay one step removed from the actual information you should be dealing in.

Or you could deal directly with all of the articles on the web. Each annotated with helpful fields you can filter on, like sentiment scores, AI-generated topic tags, the country the article was published in, and many others. That’s where Diffbot’s Knowledge Graph (KG) comes in.

The news index of Diffbot’s KG is 50x the size of Google News’ index. And each article entity in the KG is populated with a rich set of fields you can use to actually search the entire web (not just the portion of the web that paid to get in front of you).

In this guide we’ll work through how to set up a global news monitoring query in the KG. And then schedule this query to repeat and email you when new articles surface.
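As a preview of what that looks like, here’s an illustrative sketch of a monitoring query (the company tag is a placeholder, and exact field names and operators should be double-checked against the DQL documentation): it pulls recently published article entities that mention a given company and carry negative sentiment.

type:Article tags.label:"Acme Corp" sentiment<0 sortBy:date

Schedule a query like this to repeat on a daily or weekly cadence and you have a lightweight market indicator feed, no commentator required.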
Continue reading

How to Estimate the Size of a Market with the Diffbot Knowledge Graph

Organizations are one of our most popular standard entities in the Diffbot Knowledge Graph, for good reason. Behind 200M+ company data profiles is an architecture that enables incredibly precise search and summarization, allowing anyone to estimate the size of a market and forecast business opportunity in any niche.

Pre-Requisites

Step 1 – Find Companies Like X

In a perfect world, every market and industry on the planet would be neatly organized into well-defined categories. In practice, standard classifications get close, but not close enough, especially for niche markets.

What we’ll need instead is a combination of traits, including industry classifiers, keywords, and other characteristics that define companies in a market.

This is much easier to define by starting with companies we know that fit the bill. Think of it as searching for “companies like X”.

Box of Panettone cake

As an example, let’s start with finding companies like Bauducco, producer of this lovely Panettone cake. This is a market we’re hoping to sell, say, a commercial cake-baking oven to.

The closest definition of a market I might imagine for them is something like “packaged foods”. We could google this term and get some really generic hits for “food and beverage companies”, or we can do better.

We’ll start by looking this company up on Diffbot’s Knowledge Graph with a query like this:

type:Organization homepageUri:"bauducco.com"

Next, click through the most relevant result to a company profile.

Now let’s gather everything on this page that describes a company like Bauducco.

Diffbot company profile page for Bauducco

Under the company summary, the closest descriptor to their signature Panettone is “cakes”. Note that.

Under industries, they might be involved in agriculture to some degree, but we’re not really looking for other companies that are involved in agriculture. “Food and Drink Companies” will do!

That’s it.

Now that we have these traits, let’s construct a search query with DQL:

type:Organization industries:"Food and Drink Companies" description:or("cakes", "cake")

Diffbot search results - 47,000 companies like Bauducco

Nearly 48,000 results! That’s a huge list of potential customers. Like the original Google search, it’s a bit too generic to work with. Unlike results from Google though, we can segment this down as much as we’d like with just a few more parameters.

💡 Pro Tip: To see a full list of available traits to construct your query with, go to enhance.diffbot.com/ontology.

Step 2 – Remove Irrelevant Traits

What I’m first noticing is that there are a lot of international brands on this list. I’m interested in selling to companies like Bauducco in the U.S., so let’s trim this list to just companies with a presence in the United States.

type:Organization industries:"Food and Drink Companies" description:or("cakes", "cake") locations.country.name:"United States"

Diffbot search results - companies like Bauducco in the U.S.

Note that there are two “location” attributes: a singular and a plural version. The plural version (“locations”) will match all known locations of a company, while the singular version (“location”) will only match the known headquarters of a company.
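For example, if we only wanted companies actually headquartered in the U.S. (rather than any company with a U.S. office), we could swap in the singular field, as in this illustrative variant:

type:Organization industries:"Food and Drink Companies" description:or("cakes", "cake") location.country.name:"United States"

We’ll stick with the broader plural version for the rest of this walkthrough.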

Down to 8800 results. Much better. We’re not really interested in ice cream companies in this market either (after all, we’re selling a baking oven), so we’ll use the not() operator to filter ice cream companies out.

type:Organization industries:"Food and Drink Companies" description:or("cakes", "cake") not(description:"ice cream") locations.country.name:"United States"

Let’s also say our oven is really only practical for large operations of at least 100 employees. We’ll add a minimum employee threshold to our query.

type:Organization industries:"Food and Drink Companies" description:or("cakes", "cake") not(description:"ice cream") locations.country.name:"United States" nbEmployeesMin>=100


262 results. Now we’re really getting somewhere. Let’s stop here to calculate our total addressable market.

Step 3 – Calculate Total Addressable Market

To calculate TAM, we simply multiply the number of potential customers by the annual contract value of each customer.

TAM = Number of Potential Customers x Annual Contract Value

At a $1M average contract value with 262 potential customers, our TAM is approximately $262M.

This is just a starting point, of course. We’ll still want to assess existing competition, pricing sensitivity, and how much of the existing market would be willing to switch for our unique value proposition. We’ll leave that for another day.

Takeaways

Try replicating these steps for a market of your choosing. The ability to filter and summarize practically any field in the ontology provides limitless potential for market and competitive intelligence.
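For example, swapping in a different set of keywords sizes a completely different niche with the same query skeleton (the keywords below are purely illustrative):

type:Organization industries:"Food and Drink Companies" description:or("chocolate", "chocolates") locations.country.name:"United States" nbEmployeesMin>=100

Change the industry, keywords, location, or size threshold and the same three steps yield a first-pass TAM for practically any market.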


The Ultimate Guide To Data Analysis


Data analysis comes at the tail end of the data lifecycle, directly after (or simultaneously with) data integration, in which data from different sources are pulled into a unified view. Data analysis involves cleaning, modeling, inspecting, and visualizing data.

The ultimate goal of data analysis is to provide useful data-driven insights for guiding organizational decisions. And without data analysis, you might as well not even collect data in the first place. Data analysis is the process of turning data into information, insight, or hopefully knowledge of a given domain.
Continue reading

Stories By DQL: Tracking the Sentiment of a City


The story: sentiment of news mentions of Gaza fluctuates by as much as 2000% a week. 90% of news mentions about Minneapolis have had negative sentiment through the first week of June 2020 (they’re typically about 50% negative). Positive-sentiment news mentions about New York City have steadily increased week by week through the pandemic.

Locations are important. They help form our identities. They bring us together or drive us apart. Governance organizations, journalists, and scholars routinely need to track how one location perceives another. From threat detection to product launches, news monitoring in Diffbot’s Knowledge Graph makes it easy to take a truly global news feed and dissect how entities are being talked about.

In this story by DQL, discover ways to query millions of articles that feature location data (towns, cities, regions, nations).

How we got there: One of the most valuable aspects of Diffbot’s Knowledge Graph is the ability to utilize the relationships between different entity types. You can look for news mentions (article entities) related to people, products, brands, and more. You can look for what skills (skill or people entities) are held by which companies. You can look for discussions on specific products.
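To make that concrete, here’s an illustrative sketch of the kind of query behind these numbers (field names should be verified against the DQL documentation): it pulls negative-sentiment article entities tagged with the Minneapolis example above.

type:Article tags.label:"Minneapolis" sentiment<0

Run the positive-sentiment counterpart alongside it and you can track the ratio between the two week over week.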
Continue reading

Stories By DQL: George Floyd, Police, and Donald Trump

We will get justice. We will get it. We will not let this door close.

– Philonise Floyd, Brother of George Floyd

News coverage this week centered on George Floyd, police, and Donald Trump. COVID-19-related news continues to dominate globally.

That’s the macro story from all Knowledge Graph article entities published in the last week. But Knowledge Graph article entities provide users with many ways to traverse and dissect breaking news. By facet searching for the most common phrases in articles tagged “George Floyd”, you see a nuanced view of the voices being heard.

In this story, hopefully you can begin to see the power of global news mentions that can be sliced and diced on so many levels. Wondering how to gain these insights for yourself? Below, we’ll work through how to perform these queries in detail.


How we got there: Diffbot’s Knowledge Graph holds hundreds of millions of article entities at any given moment. These articles are of truly global origins, and are parsed by our cutting-edge machine vision and natural language processing systems to take unstructured article data and transform it into structured, query-able entities.
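A facet search of the kind mentioned above might look something like this sketch (illustrative only: the facet field and syntax should be checked against the current DQL documentation, and where the original analysis faceted on common phrases, this example facets on co-occurring tags):

type:Article tags.label:"George Floyd" facet:tags.label

The result is a ranked breakdown of the entities mentioned most frequently alongside George Floyd, which is one way of surfacing whose voices are being heard.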

Continue reading