seoscanners
seoscanners is a security scanner with no publicly identified operator. It probes websites for vulnerabilities, exposed credentials, misconfigurations, or compliance issues.
Whether to allow it depends entirely on who is running it. If it is your own pen-test vendor or your bug-bounty researchers, allow it. If it is hostile reconnaissance, block it.
Look at the source IP and the request pattern. Hostile scanners tend to probe known-vulnerable URLs aggressively; legitimate scanners usually identify themselves and crawl gently.
See seoscanners on your own site
Match the User-Agent header on incoming requests against the scanner's User-Agent regex.
For higher confidence, also check the source IP; UA strings can be spoofed, and IP ownership is harder to fake. This operator hasn't published IP ranges, though, so reverse DNS and ASN lookups on the source addresses are the fallback.
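A minimal way to run the User-Agent check on the server itself is sketched below. It parses a combined-format (nginx/Apache) access log and prints the source IP and request line for every hit claiming to be this bot. The "seoscanners" token and the log path are assumptions; substitute the exact regex for this bot and your own log location.

```python
import re

# Assumed User-Agent token -- replace with the scanner's actual regex.
UA_PATTERN = re.compile(r"seoscanners", re.IGNORECASE)

# Combined log format: ip - - [time] "request" status size "referer" "user-agent"
LINE_RE = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "(?P<req>[^"]*)" \d+ \S+ "[^"]*" "(?P<ua>[^"]*)"'
)

def matching_requests(path):
    """Yield (source IP, request line) for every log entry with a matching User-Agent."""
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            m = LINE_RE.match(line)
            if m and UA_PATTERN.search(m.group("ua")):
                yield m.group("ip"), m.group("req")

# Assumed log location -- point this at your own access log.
for ip, req in matching_requests("/var/log/nginx/access.log"):
    print(ip, req)
```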
Renders JavaScript: No
IP verification: User-Agent only
Crawl frequency: Variable / probing
Honors robots.txt: Yes
Honors Crawl-delay: Varies
Should I let seoscanners through?
Watch your logs for a week first. Allow your own pen-testers and bug-bounty researchers. Block hostile reconnaissance. Source IP and pattern tell you which is which.
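As a rough sketch of that week-of-logs step, counting hits per source IP is usually enough to separate a tester you recognise from an unknown address hammering you. The data below is a placeholder; feed it the (IP, request) pairs extracted from your own logs, for example with the parser sketched earlier.

```python
from collections import Counter

# Placeholder data -- substitute the (ip, request) pairs from your own access logs.
requests = [
    ("198.51.100.7", "GET /.env HTTP/1.1"),
    ("198.51.100.7", "GET /wp-login.php HTTP/1.1"),
    ("203.0.113.42", "GET / HTTP/1.1"),
]

# Hundreds of hits from one unknown IP over a week looks like reconnaissance;
# a handful from an address you recognise is more likely your own tester.
hits = Counter(ip for ip, _ in requests)
for ip, count in hits.most_common():
    print(f"{ip}\t{count} requests")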
Does blocking seoscanners affect my Google rankings?
No. seoscanners is not a search-engine crawler. Your ranking on Google or Bing is unaffected by what you do here.
How do I confirm a request is really from seoscanners?
Look at the User-Agent header in your access logs and match it against the scanner's User-Agent regex. Bear in mind that the User-Agent is easy to fake, so a match tells you "the traffic claims to be seoscanners", not "the traffic is genuinely seoscanners". If you need stronger guarantees, run a reverse-DNS check on the source IP or wait for the operator to publish IP ranges.
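The reverse-DNS lookup itself is a one-liner; a minimal sketch follows. Treat the result as a weak signal here, since this operator hasn't published hostnames or IP ranges to compare against.

```python
import socket

def reverse_dns(ip):
    """Return the PTR hostname for an IP, or None if there isn't one."""
    try:
        host, _aliases, _addrs = socket.gethostbyaddr(ip)
        return host
    except (socket.herror, socket.gaierror):
        return None

# 203.0.113.10 is a documentation-range address, purely illustrative.
print(reverse_dns("203.0.113.10"))
```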
Is seoscanners hostile traffic?
Depends entirely on the source. Penetration testers and bug-bounty researchers you've authorised should be allowed. Reconnaissance from random IPs probing for vulnerabilities should be blocked. The User-Agent alone doesn't tell you which is which; the source IP and request pattern do.
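One way to read the request pattern is sketched below: flag hits on paths that vulnerability scanners commonly probe. The path list is illustrative, not a published signature for this bot; extend it with whatever your own logs show being hit.

```python
# Paths commonly probed during vulnerability reconnaissance (illustrative list).
PROBE_PATHS = ("/.env", "/wp-login.php", "/phpmyadmin", "/.git/config", "/xmlrpc.php")

def looks_like_probing(request_line):
    # request_line looks like 'GET /some/path HTTP/1.1'
    parts = request_line.split()
    path = parts[1] if len(parts) > 1 else ""
    return any(marker in path for marker in PROBE_PATHS)

# Substitute the (ip, request) pairs extracted from your own logs.
requests = [
    ("198.51.100.7", "GET /.env HTTP/1.1"),
    ("203.0.113.42", "GET /about HTTP/1.1"),
]
for ip, req in requests:
    if looks_like_probing(req):
        print(f"probe-like request from {ip}: {req}")
```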
Why can't I tell who operates seoscanners?
Some bots run under generic User-Agent strings or are operated by smaller, less-documented companies. The pragmatic default is to treat unverified operators as untrusted traffic. If volume climbs, log the source IPs and check whether they cluster around a single network or ASN. That'll usually surface who's actually behind it.
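A quick way to check for clustering, assuming you've already collected the matching source IPs: group them by /24 network. If most of the traffic lands in one or two networks, a whois or ASN lookup on those networks will usually name the operator.

```python
from collections import Counter
import ipaddress

def cluster_by_network(ips, prefix=24):
    """Count how many of the given IPv4 addresses fall in each /prefix network."""
    nets = Counter()
    for ip in ips:
        net = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
        nets[str(net)] += 1
    return nets.most_common()

# Substitute the source IPs you logged; these are documentation-range examples.
ips = ["198.51.100.7", "198.51.100.9", "203.0.113.42"]
for network, count in cluster_by_network(ips):
    print(f"{network}\t{count} requests")
```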
What's the cleanest way to control seoscanners?
Two layers: robots.txt for the polite crawlers that read it, and rules at your CDN or edge for the ones that don't. Rankly's Agent Experience handles both from a single config, so you can allow, block, rate-limit, or serve a stripped-down version per bot. Agent Analytics handles the observation half so you know which bots are actually worth a rule.
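For the polite half, the robots.txt rule is the simplest layer. A sketch, assuming the scanner matches on a "seoscanners" user-agent token (it honors robots.txt per the properties above, but verify the exact token it announces in your own logs before relying on this):

```
User-agent: seoscanners
Disallow: /
```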