ShapBot
ShapBot is a data-aggregation crawler operated by Parallel. It collects structured information from public web pages and resells it, packages it as a dataset, or makes it available to AI applications via an API.
Crawl patterns are persistent and broad. The goal is coverage of the long tail, so you will often see ShapBot requesting URLs that other crawlers ignore.
If your content is the source for downstream AI products, blocking this agent does not just deny one company; it tends to cut off a chain of customers who buy the data.
See ShapBot on your own site
Match the User-Agent header on incoming requests against the operator's published pattern; a sketch of the check follows below.
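A minimal sketch of that check, assuming the token `ShapBot` appears in the UA string. The exact published pattern may differ, so treat the regex here as a placeholder:

```ts
// Minimal UA check for ShapBot traffic.
// ASSUMPTION: the published pattern is simply the token "ShapBot";
// substitute the operator's exact regex if it differs.
const SHAPBOT_UA = /ShapBot/i;

export function looksLikeShapBot(userAgent: string | null): boolean {
  // A match means the request *claims* to be ShapBot;
  // the header is trivially spoofable (see the note below).
  return userAgent !== null && SHAPBOT_UA.test(userAgent);
}
```

In an HTTP handler this would run against something like `request.headers.get("user-agent")`.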
For higher confidence you would also verify the source IP against the operator's published ranges; UA strings can be spoofed, while IP ownership is harder to fake. Parallel has not published ranges for ShapBot yet, though, so User-Agent matching is the only verification available for now (see the table and FAQ below).
| Property | Value |
| --- | --- |
| Renders JavaScript | No |
| IP verification | User-Agent only |
| Crawl frequency | Periodic, broad |
| Honors robots.txt | Yes |
| Honors Crawl-delay | Varies |
Should I let ShapBot through?
There's a real trade-off here. Crawlers like this turn your content into a downstream product, so whether to block, monetize, or selectively allow depends on what the relationship returns. If Parallel actually drives traffic or citations back to you, letting it through usually pays for itself. If it just consumes bandwidth, block it.
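If you land on blocking or selectively allowing, the polite-crawler half is a couple of robots.txt lines. A sketch, assuming the robots token matches the bot name (verify the exact token against Parallel's documentation):

```txt
# Block ShapBot everywhere (token assumed to be "ShapBot")
User-agent: ShapBot
Disallow: /

# Or: allow only a public section and block the rest
# User-agent: ShapBot
# Allow: /public/
# Disallow: /
```

This only binds crawlers that honor robots.txt; per the table above, ShapBot does.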
Does blocking ShapBot affect my Google rankings?
No. ShapBot feeds Parallel's data products, not a search index, so your classical search rankings stay intact. The actual trade is whether you want your content folded into the datasets and AI applications built on top of it.
How do I confirm a request is really from ShapBot?
Look at the User-Agent header in your access logs and match it against the pattern above. Bear in mind that the User-Agent is easy to fake, so this check tells you "the traffic claims to be ShapBot", not "the traffic is genuinely ShapBot". If you need stronger guarantees, look for a reverse-DNS check or wait for Parallel to publish IP ranges.
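For reference, a forward-confirmed reverse DNS check looks like the sketch below (Node.js). The hostname suffix here is purely hypothetical; Parallel has not published one for ShapBot, which is exactly why User-Agent matching is the only check available today.

```ts
import { reverse, resolve4 } from "node:dns/promises";

// HYPOTHETICAL suffix -- Parallel has not published one for ShapBot.
const EXPECTED_SUFFIX = ".crawl.parallel.example";

// Forward-confirmed reverse DNS: ip -> hostname -> ip must round-trip,
// and the hostname must sit under the operator's domain.
export async function verifiedByRdns(ip: string): Promise<boolean> {
  try {
    for (const host of await reverse(ip)) {
      if (!host.endsWith(EXPECTED_SUFFIX)) continue;
      const forward = await resolve4(host);
      if (forward.includes(ip)) return true;
    }
  } catch {
    // NXDOMAIN or lookup failure: treat as unverified.
  }
  return false;
}
```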
What happens to my content if I let ShapBot fetch it?
It gets pulled into Parallel's data pipeline and stored; from there it can be resold, packaged into datasets, or served to AI applications through the API. Whether and how it surfaces in any given downstream product is rarely disclosed. The only real lever you have over the outcome is what you allow at fetch time.
What's the cleanest way to control ShapBot?
Two layers. Robots.txt for the polite crawlers that read it, and rules at your CDN or edge for the ones that don't. Rankly's Agent Experience handles both from a single config, so you can allow, block, rate-limit, or serve a stripped-down version per bot. Agent Analytics handles the observation half so you know which bots are actually worth a rule.
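If you are wiring the edge half by hand instead, it is a small routing function. A sketch in the shape of a Cloudflare Worker; the UA regex is the same assumed token as above, and the policy choices are illustrative, not Rankly's API:

```ts
// Edge sketch: decide ShapBot's fate before requests reach origin.
// The /ShapBot/i token is an assumption; swap in the published pattern.
const SHAPBOT_UA = /ShapBot/i;

export default {
  async fetch(request: Request): Promise<Response> {
    const ua = request.headers.get("user-agent") ?? "";

    if (SHAPBOT_UA.test(ua)) {
      // Pick one policy per bot: block outright, rate-limit,
      // or rewrite to a stripped-down variant of the page.
      return new Response("Forbidden", { status: 403 });
    }

    // Everyone else passes through to origin untouched.
    return fetch(request);
  },
};
```

Serving a stripped-down variant instead of a 403 is a one-line change: rewrite the request to a lighter route before forwarding it to origin.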