voila
voila is a web archiver with no publicly identified operator. Its purpose is to preserve a snapshot of public web pages so they remain available even after the original is changed or taken down.
Crawls are typically less frequent than those of search or AI crawlers, and the archived copy is usually served from the archive's own domain, not from your origin.
Most archivers are aligned with the public-interest goal of historical preservation. Blocking them removes your site from that record.
See voila on your own site
Match the User-Agent header on incoming requests against the pattern below.
regex
For higher confidence you would normally also verify the source IP against the operator's published ranges, but voila's operator hasn't published any, so the User-Agent is the only signal available today. UA strings can be spoofed; IP ownership is harder to fake.
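A minimal sketch of that header check, written as a small TypeScript helper. The `CLAIMED_UA` pattern here is a placeholder rather than voila's verified string (substitute the pattern above), and the sample User-Agent value is invented for illustration.

```typescript
// Minimal sketch: flag requests whose User-Agent claims to be voila.
// CLAIMED_UA is a placeholder pattern, not the verified string -- substitute
// the pattern shown above. voila has no published IP ranges to check against.
const CLAIMED_UA = /voila/i; // hypothetical pattern

function claimsToBeVoila(userAgent: string | null): boolean {
  return userAgent !== null && CLAIMED_UA.test(userAgent);
}

// Example: inspect a single request's User-Agent (e.g. inside an edge handler).
const ua = "Mozilla/5.0 (compatible; voila)"; // sample value for illustration
console.log(claimsToBeVoila(ua)); // true -- but a UA string alone can be spoofed
```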
Renders JavaScript: No
IP verification: User-Agent only
Crawl frequency: Periodic snapshots
Honors robots.txt: Yes
Honors Crawl-delay: Varies
Should I let voila through?
In most cases, yes. Archivers preserve the public-interest record. Blocking removes your site from that history. If volume gets noisy, rate-limit it before you block it outright.
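If you'd rather throttle than block, a fixed-window counter keyed by bot name or source IP is usually enough. The TypeScript sketch below is generic rather than tied to any particular CDN, and the window size and request cap are arbitrary example values, not recommendations.

```typescript
// Generic sketch of "rate-limit before you block": allow a bot a fixed number
// of requests per window and answer 429 beyond that. Thresholds are examples.
const WINDOW_MS = 60_000;           // one-minute window (arbitrary)
const MAX_REQUESTS_PER_WINDOW = 30; // arbitrary cap

const counters = new Map<string, { windowStart: number; count: number }>();

function allowRequest(botKey: string, now = Date.now()): boolean {
  const entry = counters.get(botKey);
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    counters.set(botKey, { windowStart: now, count: 1 });
    return true;
  }
  entry.count += 1;
  return entry.count <= MAX_REQUESTS_PER_WINDOW;
}

// Usage: key by the claimed bot name (or source IP) and return
// HTTP 429 Too Many Requests when the window is exhausted.
if (!allowRequest("voila")) {
  // respond with 429 instead of blocking outright
}
```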
Does blocking voila affect my Google rankings?
No. voila is not a search-engine crawler. Your ranking on Google or Bing is unaffected by what you do here.
How do I confirm a request is really from voila?
Look at the User-Agent header in your access logs and match it against the strings listed above. Bear in mind that the User-Agent is easy to fake, so this check tells you "the traffic claims to be voila", not "the traffic is genuinely voila". If you need stronger guarantees, run a reverse-DNS check on the source IPs or wait for the operator to publish IP ranges.
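If you want one step beyond reading the logs, a reverse-DNS lookup on the source IP is cheap. A small Node/TypeScript sketch follows; the IP shown is from a documentation range, not a real voila address, and a PTR record is only a hint about ownership, not proof.

```typescript
// Sketch: see what a source IP resolves back to via reverse DNS (PTR lookup).
// The PTR record is whatever the IP's owner chose to publish, so treat the
// result as a hint, not verification. The IP below is a documentation address.
import { promises as dns } from "node:dns";

async function describeSource(ip: string): Promise<string> {
  try {
    const hostnames = await dns.reverse(ip);
    return hostnames.join(", ") || "no PTR record";
  } catch {
    return "reverse lookup failed";
  }
}

// Example: run against an IP pulled from your access logs.
describeSource("203.0.113.7").then((host) =>
  console.log(`203.0.113.7 resolves back to: ${host}`)
);
```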
Why is voila archiving my site?
Archivers preserve the public record. Specific pages or domains get pulled into long-term storage so they're readable years later. Blocking removes you from that history, which is fine for some sites and a real loss for others.
Why can't I tell who operates voila?
Some bots run under generic User-Agent strings or are operated by smaller, less-documented companies. The pragmatic default is to treat unverified operators as untrusted traffic. If volume climbs, log the source IPs and check whether they cluster around a single network or ASN. That'll usually surface who's actually behind it.
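One quick way to do that clustering is to bucket source IPs by /24 prefix and see which networks dominate. A rough Node/TypeScript sketch, assuming a common-log-style access log with the client IP as the first field; the log path is a placeholder for your setup.

```typescript
// Sketch: group source IPs from an access log by /24 prefix. A single
// dominant prefix usually points at one operator or hosting provider.
// LOG_PATH and the "IP is the first field" assumption are placeholders.
import { readFileSync } from "node:fs";

const LOG_PATH = "/var/log/nginx/access.log"; // hypothetical path
const lines = readFileSync(LOG_PATH, "utf8").split("\n");

const byPrefix = new Map<string, number>();
for (const line of lines) {
  const ip = line.split(" ")[0];
  if (!/^\d+\.\d+\.\d+\.\d+$/.test(ip)) continue; // IPv4 only, for brevity
  const prefix = ip.split(".").slice(0, 3).join(".") + ".0/24";
  byPrefix.set(prefix, (byPrefix.get(prefix) ?? 0) + 1);
}

// Print the ten busiest prefixes, most active first.
[...byPrefix.entries()]
  .sort((a, b) => b[1] - a[1])
  .slice(0, 10)
  .forEach(([prefix, count]) => console.log(`${prefix}\t${count}`));
```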
What's the cleanest way to control voila?
Two layers: robots.txt for the polite crawlers that read it, and rules at your CDN or edge for the ones that don't. Rankly's Agent Experience handles both from a single config, so you can allow, block, rate-limit, or serve a stripped-down version per bot. Agent Analytics handles the observation half, so you know which bots are actually worth a rule.
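Whatever tooling you use, the edge half boils down to a per-bot policy table. The sketch below is a generic TypeScript illustration, not Rankly's actual config format; the bot names and the default decision are examples only.

```typescript
// Generic sketch of a per-bot policy lookup enforced at your own edge.
// Not Rankly's config format -- just the "one place for decisions" idea.
type Policy = "allow" | "block" | "rate-limit";

const botPolicies: Record<string, Policy> = {
  voila: "allow", // archiver: let it through by default
  // "some-scraper": "block", // hypothetical entry for an unwanted bot
};

function decideFor(userAgent: string): Policy {
  const match = Object.keys(botPolicies).find((bot) =>
    userAgent.toLowerCase().includes(bot)
  );
  return match ? botPolicies[match] : "allow"; // default for unknown traffic
}

console.log(decideFor("Mozilla/5.0 (compatible; voila)")); // "allow"
```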