WebZIP
WebZIP is an intelligence-gathering crawler with no publicly identified operator. It collects competitive, marketing, or threat-intel data from public web pages, usually on behalf of a buyer who wants visibility into a market or competitor set.
Volume is moderate but persistent. The crawler is interested in pricing pages, product pages, ad creatives, or whatever else its customers are tracking.
Whether to allow it is a strategy call. Some businesses want to be seen by their competitors; others would rather hide.
See WebZIP on your own site
Match the User-Agent header on incoming requests against the pattern below.
regex
WebZIP has no published IP ranges, so the User-Agent match is the only verification available. UA strings can be spoofed, so treat a match as a claim about the traffic rather than proof of who sent it.
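If you want to automate the check, here is a minimal sketch that scans an access log for requests claiming to be WebZIP. The specifics are assumptions: it matches any User-Agent containing "webzip" (case-insensitive) rather than the exact pattern above, expects the common combined log format, and uses an illustrative log path. Swap in your own pattern and path.

```python
import re
from collections import Counter

# Assumed pattern: any User-Agent containing "webzip", case-insensitive.
# Substitute the exact regex from the pattern above if it differs.
WEBZIP_UA = re.compile(r"webzip", re.IGNORECASE)

def claimed_webzip_hits(log_path: str) -> Counter:
    """Count requests whose User-Agent claims to be WebZIP, keyed by source IP.

    Assumes the combined log format, where the User-Agent is the last
    double-quoted field on each line and the source IP is the first field.
    """
    hits = Counter()
    with open(log_path) as log:
        for line in log:
            fields = line.split('"')
            if len(fields) < 2:
                continue
            user_agent = fields[-2]            # last quoted field
            source_ip = line.split(" ", 1)[0]  # first field in combined format
            if WEBZIP_UA.search(user_agent):
                hits[source_ip] += 1
    return hits

if __name__ == "__main__":
    # Illustrative path; point this at your own access log.
    for ip, count in claimed_webzip_hits("/var/log/nginx/access.log").most_common(10):
        print(f"{ip}\t{count}")
```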
Renders JavaScript: No
IP verification: User-Agent only
Crawl frequency: Heavy on demand
Honors robots.txt: Yes
Honors Crawl-delay: Varies
Should I let WebZIP through?
In most cases, yes. Whether to be seen by competitors is a strategy call. If volume gets noisy, rate-limit it before you block it outright.
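If your CDN or edge doesn't expose per-bot rate limits, a minimal application-level throttle looks like the token-bucket sketch below. The rate, burst size, and the "webzip" substring check are all illustrative; in practice this usually belongs at the CDN or reverse proxy rather than in app code.

```python
import time

class TokenBucket:
    """Simple token bucket: allow `rate` requests per second, with bursts up to `burst`."""

    def __init__(self, rate: float, burst: float):
        self.rate = rate
        self.burst = burst
        self.tokens = burst
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per bot you want to throttle; 1 req/s with bursts of 5 is illustrative.
webzip_bucket = TokenBucket(rate=1.0, burst=5.0)

def should_serve(user_agent: str) -> bool:
    """Return False (i.e. respond 429) when a WebZIP-claiming request exceeds the budget."""
    if "webzip" in user_agent.lower():
        return webzip_bucket.allow()
    return True
```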
Does blocking WebZIP affect my Google rankings?
No. WebZIP is not a search-engine crawler. Your ranking on Google or Bing is unaffected by what you do here.
How do I confirm a request is really from WebZIP?
Look at the User-Agent header in your access logs and match it against the pattern listed above. Keep in mind that the User-Agent is easy to fake, so this check tells you "the traffic claims to be WebZIP", not "the traffic is genuinely WebZIP". If you need stronger guarantees, check whether the source IPs reverse-resolve to a consistent operator domain, or wait for the operator to publish IP ranges.
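A quick way to run that stronger check: take the source IPs claiming to be WebZIP and reverse-resolve them. The IPs below are documentation-range placeholders, and since there is no documented operator domain to verify against, this only tells you whether the traffic clusters behind consistent hostnames, not whether it is official.

```python
import socket

def reverse_dns(ip: str) -> str:
    """Return the PTR hostname for an IP, or a placeholder when none is set."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)
        return hostname
    except socket.herror:
        return "(no PTR record)"

# IPs would come from your access logs (e.g. the counter built earlier).
for ip in ["203.0.113.10", "203.0.113.11"]:  # placeholders, not real WebZIP addresses
    print(ip, "->", reverse_dns(ip))
```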
Why is a third-party tool crawling my site?
Someone, possibly a competitor tracking your pricing or product pages, possibly your own team, set up a job in this tool. The crawler runs on their schedule. Blocking it only removes their visibility into your site; it doesn't break anything user-facing.
Why can't I tell who operates WebZIP?
Some bots run under generic User-Agent strings or are operated by smaller, less-documented companies. The pragmatic default is to treat unverified operators as untrusted traffic. If volume climbs, log the source IPs and check whether they cluster around a single network or ASN. That'll usually surface who's actually behind it.
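A lightweight way to do that clustering, assuming you have already pulled the claiming IPs from your logs: group them by network prefix and see whether one prefix dominates. Mapping a prefix to an owner or ASN still takes a whois or IP-intelligence lookup, which isn't shown here; the sample IPs are placeholders.

```python
import ipaddress
from collections import Counter

def cluster_by_prefix(ips: list[str], prefix_len: int = 24) -> Counter:
    """Group source IPs by network prefix to see whether they share one network."""
    networks = Counter()
    for ip in ips:
        # strict=False builds the containing /24 directly from a host address.
        network = ipaddress.ip_network(f"{ip}/{prefix_len}", strict=False)
        networks[str(network)] += 1
    return networks

# Documentation-range placeholders; feed in the real IPs from your logs.
sample = ["203.0.113.10", "203.0.113.25", "198.51.100.7"]
for network, count in cluster_by_prefix(sample).most_common():
    print(f"{network}\t{count}")
```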
What's the cleanest way to control WebZIP?
Two layers. Robots.txt for the polite crawlers that read it, and rules at your CDN or edge for the ones that don't. Rankly's Agent Experience handles both from a single config, so you can allow, block, rate-limit, or serve a stripped-down version per bot. Agent Analytics handles the observation half so you know which bots are actually worth a rule.
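If you go the robots.txt route, the entry is short. The "WebZIP" User-agent token below is an assumption based on the bot's name; confirm the exact token from your own logs before relying on it, and remember that Crawl-delay support varies.

```txt
User-agent: WebZIP
Disallow: /

# Or, to allow it but slow it down (Crawl-delay support varies by crawler):
# User-agent: WebZIP
# Crawl-delay: 10
```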