mirrorweb
mirrorweb is a web archiver operated by Mirrorweb. Its purpose is to preserve snapshots of public web pages so they remain available even after the originals are changed or taken down.
Crawls are typically less frequent than search or AI crawlers, and the archived copy is usually served from a separate domain (the archive's), not from your origin.
Most archivers are aligned with the public-interest goal of historical preservation. Blocking them removes your site from that record.
See mirrorweb on your own site
Match the User-Agent header on incoming requests against mirrorweb's UA pattern, as in the sketch below.
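A minimal sketch of that check in Python, assuming the token to look for is simply "mirrorweb" (the exact regex isn't shown on this page, so treat the pattern as a placeholder):

```python
import re

# Placeholder pattern -- assumes the UA token is just "mirrorweb".
MIRRORWEB_UA = re.compile(r"mirrorweb", re.IGNORECASE)

def claims_to_be_mirrorweb(user_agent: str) -> bool:
    """True if the User-Agent header *claims* to be mirrorweb (spoofable)."""
    return bool(MIRRORWEB_UA.search(user_agent or ""))

print(claims_to_be_mirrorweb("Mozilla/5.0 (compatible; mirrorweb)"))  # True
```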
For higher confidence you would normally also verify the source IP against the operator's published ranges, since UA strings can be spoofed and IP ownership is harder to fake. Mirrorweb hasn't published ranges for this bot, though (see "IP verification" below), so the UA match is the only first-party check available for now.
Renders JavaScript: No
IP verification: User-Agent only
Crawl frequency: Periodic snapshots
Honors robots.txt: Yes
Honors Crawl-delay: Varies
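Because the bot honors robots.txt and Crawl-delay support is listed as "Varies", a robots.txt entry is the lightest-touch control. A sketch, assuming the bot's robots.txt token matches its name, which this page doesn't confirm:

```
# Assumes the token is "mirrorweb" -- check Mirrorweb's docs to confirm.
User-agent: mirrorweb
Allow: /
Crawl-delay: 10   # best-effort: support is listed as "Varies"

# To opt out of archiving entirely, use instead:
# User-agent: mirrorweb
# Disallow: /
```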
Mirrorweb runs 2 bots in total. Each one is a separate user-agent so you can allow or block them independently.
Web Archiver: 1 bot - mirrorweb (this page)
Search Engine: 1 bot

Should I let mirrorweb through?
In most cases, yes. Archivers preserve the public-interest record. Blocking removes your site from that history. If volume gets noisy, rate-limit it before you block it outright.
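If you'd rather slow it down than block it, a token bucket keyed to the archiver's UA is the usual shape. A minimal sketch; the rate and burst numbers are illustrative, not a Mirrorweb recommendation:

```python
import time

class TokenBucket:
    """Refill `rate` tokens per second, hold at most `capacity`."""

    def __init__(self, rate: float, capacity: int) -> None:
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# Roughly one request every two seconds, with a burst allowance of five.
archiver_bucket = TokenBucket(rate=0.5, capacity=5)

def status_for(user_agent: str) -> int:
    """429 asks the crawler to slow down; 403 would block it outright."""
    if "mirrorweb" in (user_agent or "").lower() and not archiver_bucket.allow():
        return 429
    return 200
```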
Does blocking mirrorweb affect my Google rankings?
No. mirrorweb is not a search-engine crawler. Your ranking on Google or Bing is unaffected by what you do here.
How do I confirm a request is really from mirrorweb?
Look at the User-Agent header in your access logs and match it against the strings listed above. Worth knowing that the User-Agent is easy to fake, so this check tells you "the traffic claims to be mirrorweb", not "the traffic is genuinely mirrorweb". If you need stronger guarantees, use a forward-confirmed reverse-DNS check or wait for Mirrorweb to publish IP ranges.
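If Mirrorweb's infrastructure turns out to have consistent reverse DNS, the forward-confirmed check is the standard pattern: resolve the IP to a hostname, check the domain, then resolve the hostname back and confirm it returns the same IP. A sketch; the suffix below is a placeholder, since Mirrorweb hasn't documented one:

```python
import socket

# Placeholder -- Mirrorweb hasn't documented a reverse-DNS domain.
EXPECTED_RDNS_SUFFIX = ".mirrorweb.example"

def is_verified_mirrorweb(ip: str) -> bool:
    """Forward-confirmed reverse DNS: IP -> hostname -> same IP."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)   # reverse lookup
        if not hostname.lower().endswith(EXPECTED_RDNS_SUFFIX):
            return False
        forward = {info[4][0] for info in socket.getaddrinfo(hostname, None)}
        return ip in forward                        # forward confirmation
    except OSError:  # covers socket.herror and socket.gaierror
        return False
```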
Why is mirrorweb archiving my site?
Archivers preserve the public record. Specific pages or domains get pulled into long-term storage so they're readable years later. Blocking removes you from that history, which is fine for some sites and a real loss for others.
How is mirrorweb different from Mirrorweb's other bots?
Mirrorweb splits work across multiple user-agents so site owners can decide on each one independently. Training crawlers, live-fetch agents, search indexers, and agentic browsers each get their own name. Worth scanning the rest of the Mirrorweb family above to see which ones actually matter for your site.
What's the cleanest way to control mirrorweb?
Two layers: robots.txt for the polite crawlers that read it, and rules at your CDN or edge for the ones that don't. Rankly's Agent Experience handles both from a single config, so you can allow, block, rate-limit, or serve a stripped-down version per bot. Agent Analytics handles the observation half so you know which bots are actually worth a rule.
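Whatever product applies it, the edge half boils down to a per-bot policy lookup. A vendor-neutral sketch; the tokens and policies here are illustrative only:

```python
# Illustrative policy table -- "badbot" is a made-up contrast case.
POLICIES = {
    "mirrorweb": "allow",   # archiver: usually worth letting through
    "badbot": "block",
}

def decide(user_agent: str) -> str:
    ua = (user_agent or "").lower()
    for token, policy in POLICIES.items():
        if token in ua:
            return policy
    return "allow"          # unlisted agents pass through by default

assert decide("Mozilla/5.0 (compatible; mirrorweb)") == "allow"
assert decide("BadBot/1.0") == "block"
```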