Sublinq
Sublinq is an SEO crawler operated by the company of the same name. It maps link graphs, ranking signals, and on-page audits, typically feeding a SaaS product that helps marketers monitor their own or competitors' sites.
Volume can be heavy. SEO crawlers often request every page on a site, and several can hit you in parallel if multiple customers are auditing your domain at once.
Most are well-behaved: they respect robots.txt and back off when rate-limited. What you get in return for allowing them is visibility inside the marketing tools your customers and competitors use.
See Sublinq on your own site
Match the User-Agent header on incoming requests against Sublinq's User-Agent pattern.
User-Agent strings are easy to spoof, and Sublinq publishes no IP ranges and offers no reverse-DNS verification, so a User-Agent match is the only check available: treat a hit as "claims to be Sublinq" rather than proof.
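If your traffic passes through a CDN worker or reverse proxy, the check fits in a few lines. This is a minimal sketch assuming a Workers-style fetch environment; the /sublinq/i pattern is a stand-in for the real User-Agent string, so swap in the exact pattern before acting on the result.

```typescript
// Minimal sketch: flag requests whose User-Agent claims to be Sublinq.
// /sublinq/i is a placeholder pattern, not the operator's documented string.
const SUBLINQ_UA = /sublinq/i;

export function claimsToBeSublinq(request: Request): boolean {
  const ua = request.headers.get("user-agent") ?? "";
  return SUBLINQ_UA.test(ua);
}

// Usage inside a fetch handler:
//   if (claimsToBeSublinq(request)) { /* tag, throttle, or log it */ }
```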
Renders JavaScript: No
IP verification: User-Agent only
Crawl frequency: Heavy, on demand
Honors robots.txt: Yes
Honors Crawl-delay: Yes
Should I let Sublinq through?
In most cases, yes. Being visible in SEO tooling is useful, and since Sublinq honors robots.txt and Crawl-delay you can keep the load manageable without shutting it out. If the volume gets noisy, rate-limit it before you block it outright.
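If you want to enforce that limit yourself at the edge rather than rely on Crawl-delay, a fixed-window throttle on requests identifying as Sublinq is enough. A rough sketch, assuming a Workers-style environment: the counter lives in memory (a real multi-node deployment would use a shared store), and the window, budget, and /sublinq/i pattern are placeholders.

```typescript
// Sketch of a fixed-window throttle for traffic identifying as Sublinq.
// In-memory state only works per node; values below are placeholders.
const WINDOW_MS = 60_000; // 1-minute window
const BUDGET = 120;       // max Sublinq requests allowed per window

let windowStart = Date.now();
let seen = 0;

export function throttleSublinq(request: Request): Response | null {
  const ua = request.headers.get("user-agent") ?? "";
  if (!/sublinq/i.test(ua)) return null; // not claiming to be Sublinq

  const now = Date.now();
  if (now - windowStart >= WINDOW_MS) {  // roll over to a fresh window
    windowStart = now;
    seen = 0;
  }
  if (++seen > BUDGET) {
    return new Response("Too Many Requests", {
      status: 429,
      headers: { "retry-after": "60" },
    });
  }
  return null; // under budget: let the request continue
}
```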
Does blocking Sublinq affect my Google rankings?
No. Sublinq is not a search-engine crawler. Your ranking on Google or Bing is unaffected by what you do here.
How do I confirm a request is really from Sublinq?
Look at the User-Agent header in your access logs and match it against Sublinq's User-Agent pattern. Keep in mind that the User-Agent is easy to fake, so this check tells you "the traffic claims to be Sublinq", not "the traffic is genuinely Sublinq". Stronger verification would need reverse-DNS support or published IP ranges, and Sublinq currently offers neither.
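For a quick read on how much of your traffic is making that claim, a short pass over the access logs is enough. A sketch assuming a combined-format log (client IP first, User-Agent as the last quoted field) and a hypothetical nginx log path; both the path and the /sublinq/i pattern are placeholders to adjust for your setup.

```typescript
import { readFileSync } from "node:fs";

// Sketch: count requests per source IP whose User-Agent claims to be Sublinq.
// Log path, log format, and the /sublinq/i pattern are placeholders.
const lines = readFileSync("/var/log/nginx/access.log", "utf8").split("\n");

const bySourceIp = new Map<string, number>();
for (const line of lines) {
  const quoted = line.match(/"([^"]*)"/g); // all quoted fields in the line
  const ua = quoted?.at(-1) ?? "";         // User-Agent is the last quoted field
  if (!/sublinq/i.test(ua)) continue;      // skip everything else
  const ip = line.split(" ")[0];
  bySourceIp.set(ip, (bySourceIp.get(ip) ?? 0) + 1);
}

console.log(bySourceIp); // how many Sublinq-claiming requests came from each IP
```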
Why is a third-party tool crawling my site?
Someone, possibly a competitor running a backlink audit, possibly your own team, set up a crawl job in Sublinq, and the crawler runs on their schedule, not yours. Blocking it only removes their visibility into your site; it doesn't break anything user-facing.
What's the cleanest way to control Sublinq?
Two layers. Robots.txt for the polite crawlers that read it, and rules at your CDN or edge for the ones that don't. Rankly's Agent Experience handles both from a single config, so you can allow, block, rate-limit, or serve a stripped-down version per bot. Agent Analytics handles the observation half so you know which bots are actually worth a rule.
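To illustrate the robots.txt layer (a generic sketch, not Rankly's config format), the handler below serves a robots.txt that slows Sublinq down via Crawl-delay, which the table above says it honors, and fences off low-value paths. The "Sublinq" user-agent token, the delay value, and the paths are placeholders rather than the operator's documented values; the edge layer for bots that ignore robots.txt is the throttle sketched earlier.

```typescript
// Sketch: serve a robots.txt that rate-limits Sublinq via Crawl-delay and
// keeps it out of low-value paths. Token, delay, and paths are placeholders.
const ROBOTS_TXT = `User-agent: Sublinq
Crawl-delay: 10
Disallow: /search
Disallow: /cart

User-agent: *
Allow: /
`;

export function maybeServeRobots(request: Request): Response | null {
  if (new URL(request.url).pathname !== "/robots.txt") return null;
  return new Response(ROBOTS_TXT, {
    headers: { "content-type": "text/plain; charset=utf-8" },
  });
}
```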