Yes, even the biggest leaders in market intelligence. Even us.
Some focus solely on startups. Some only on venture-backed companies. But you probably wouldn’t even know. Because most won’t (or can’t) tell you what their data is biased towards! 🤭
“We have over 10M companies in our database!” is a meaningless statement if you can’t tell whether the data is a representative sample of, say, Indian restaurants worldwide, or, more realistically, just whatever they happened to scrape.
Unless we’re talking at least 200M+ unique organizations strong, you’re looking at a biased dataset. And that’s still a conservative minimum.
This is common knowledge for data buyers, who compensate for the unknown bias by evaluating datasets against known, easily verifiable data, like the Fortune 1000.
Given enough evaluation feedback cycles, most organization data brokers end up biased towards the Fortune 1000.
If your target is enterprise B2B, you’re in luck. You can find that data anywhere. Just check your spam folder.
And if your market intelligence application needs the closest thing to a truly representative sample of global organizations, it might seem impossible.
For data brokers, it just doesn’t make sense to boil the ocean. It’s cheaper and easier to focus data entry resources on a few markets and whatever coverage-gap feedback they get from lost deals.
Even if they did manage to compile all the companies on Earth, they would have to do it over and over again to keep their records fresh.
It’s an absurd and impractical human labor cost to maintain. So no one employs hundreds of people just to enter org data. Not even us.
We employ machines instead, which crawl millions of publicly accessible websites, interpret raw text into data autonomously, and structure each detail into facts on every organization known to the public web.
Which, as it turns out, is our known bias.