Redactus Web Classifier
Redactus Web Classifier is a security scanner operated by Redactus. It probes websites for vulnerabilities, exposed credentials, misconfigurations, or compliance issues.
Whether to allow it depends entirely on who is running it. If it is your own pen-test vendor or your bug-bounty researchers, allow it. If it is hostile reconnaissance, block it.
Look at the source IP and the request pattern. Hostile scanners tend to probe known-vulnerable URLs aggressively; legitimate scanners usually identify themselves and crawl gently.
See Redactus Web Classifier on your own site
Match the User-Agent header on incoming requests against the operator's published pattern.
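The exact UA string isn't reproduced on this page, so the pattern below is a placeholder; swap in whatever Redactus publishes. A minimal sketch in Python:

```python
import re

# Placeholder pattern: substitute the UA string Redactus actually publishes.
REDACTUS_UA = re.compile(r"Redactus Web Classifier", re.IGNORECASE)

def claims_redactus(user_agent: str) -> bool:
    """True if the request *claims* to be Redactus Web Classifier."""
    return bool(REDACTUS_UA.search(user_agent or ""))
```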
For higher confidence, also verify the source IP against the operator's published ranges. UA strings can be spoofed; IP ownership is harder to fake.
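As the FAQ below notes, Redactus hasn't published IP ranges yet, so the CIDR block here is a stand-in (TEST-NET-3). A sketch with the standard-library ipaddress module, assuming ranges eventually exist:

```python
import ipaddress

# Stand-in CIDR block: replace with ranges once Redactus publishes them.
PUBLISHED_RANGES = [ipaddress.ip_network("203.0.113.0/24")]

def ip_in_published_ranges(ip: str) -> bool:
    """True if the source IP falls inside the operator's published ranges."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in PUBLISHED_RANGES)
```

Requiring both signals, a UA match and an in-range IP, is stricter than either check alone.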
Renders JavaScript: No
IP verification: User-Agent only
Crawl frequency: Variable / probing
Honors robots.txt: Yes
Honors Crawl-delay: Varies
Should I let Redactus Web Classifier through?
Watch your logs for a week first. Allow your own pen-testers and bug-bounty researchers. Block hostile reconnaissance. Source IP and pattern tell you which is which.
Does blocking Redactus Web Classifier affect my Google rankings?
No. Redactus Web Classifier is not a search-engine crawler. Your ranking on Google or Bing is unaffected by what you do here.
How do I confirm a request is really from Redactus Web Classifier?
Look at the User-Agent header in your access logs and match it against the strings listed above. Bear in mind the User-Agent is easy to fake, so this check tells you "the traffic claims to be Redactus Web Classifier", not "the traffic is genuinely Redactus Web Classifier". If you need stronger guarantees, run a forward-confirmed reverse-DNS check or wait for Redactus to publish IP ranges.
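A forward-confirmed reverse-DNS check looks like the sketch below. Redactus hasn't documented a hostname suffix, so ".redactus.example" is a placeholder:

```python
import socket

def reverse_dns_verified(ip: str, expected_suffix: str = ".redactus.example") -> bool:
    """PTR lookup, suffix check, then forward-confirm the hostname back to the IP."""
    try:
        host, _, _ = socket.gethostbyaddr(ip)            # reverse (PTR) lookup
    except socket.herror:
        return False
    if not host.rstrip(".").endswith(expected_suffix):   # placeholder suffix
        return False
    try:
        forward_ips = {info[4][0] for info in socket.getaddrinfo(host, None)}
    except socket.gaierror:
        return False
    return ip in forward_ips                             # forward confirmation
```

The forward-confirmation step is what makes this meaningful: anyone who controls an IP block can point its PTR record at any name, but they can't make the forward lookup of a domain they don't own resolve back to their address.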
Is Redactus Web Classifier hostile traffic?
Depends entirely on the source. Penetration testers and bug-bounty researchers you've authorised should be allowed. Reconnaissance from random IPs probing for vulnerabilities should be blocked. The User-Agent alone doesn't tell you which is which; the source IP and request pattern do.
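To make "request pattern" concrete, tally, per source IP, how many requests hit paths attackers commonly probe. Everything here is an assumption to adapt: the path list is illustrative and the parser expects combined-log-format access logs:

```python
import re
from collections import Counter

# Illustrative probe paths; extend with whatever your own logs show.
PROBE_PATHS = ("/wp-login.php", "/.env", "/.git/", "/phpmyadmin", "/admin")

# Minimal combined-log-format parser: capture source IP and request path.
LOG_LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|POST|HEAD) (\S+)')

def probing_ips(log_lines, threshold=10):
    """Return {ip: hit_count} for IPs hitting probe paths at least `threshold` times."""
    hits = Counter()
    for line in log_lines:
        m = LOG_LINE.match(line)
        if m and any(m.group(2).startswith(p) for p in PROBE_PATHS):
            hits[m.group(1)] += 1
    return {ip: n for ip, n in hits.items() if n >= threshold}
```

An IP that trips this check and isn't one of your authorised testers is a block candidate, whatever its User-Agent claims.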
What's the cleanest way to control Redactus Web Classifier?
Two layers. Robots.txt for the polite crawlers that read it, and rules at your CDN or edge for the ones that don't. Rankly's Agent Experience handles both from a single config, so you can allow, block, rate-limit, or serve a stripped-down version per bot. Agent Analytics handles the observation half so you know which bots are actually worth a rule.
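This isn't Rankly's config, and edge rules are the better home for it, but if you want an origin-side fallback, here is a generic WSGI middleware sketch (same placeholder pattern as above):

```python
import re

class BotPolicyMiddleware:
    """WSGI middleware that returns 403 when the User-Agent matches a blocked pattern."""

    def __init__(self, app, blocked_pattern):
        self.app = app
        self.blocked = re.compile(blocked_pattern, re.IGNORECASE)

    def __call__(self, environ, start_response):
        ua = environ.get("HTTP_USER_AGENT", "")
        if self.blocked.search(ua):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Forbidden\n"]
        return self.app(environ, start_response)
```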