Pywikibot
Pywikibot is a single-page fetcher operated by Wikimedia. It fetches one page (or a small set) when triggered by a user action, typically a link being shared on social media, in a messaging app, or through an RSS reader.
Volume tracks shares and clicks rather than crawl schedules. A trending link can produce a sudden spike, but Pywikibot will not crawl the rest of your site.
Blocking it usually means the link previews on the corresponding platform stop showing your title, image, and description.
See Pywikibot on your own site
Match the User-Agent header on incoming requests against the pattern below.
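A minimal sketch of that check in Python, assuming Pywikibot's default User-Agent includes a `Pywikibot/<version>` token; confirm the exact format against your own access logs before relying on it:

```python
import re

# Assumption: the UA contains a token like "Pywikibot/9.0".
# Adjust the pattern if your logs show a different format.
PYWIKIBOT_UA = re.compile(r"Pywikibot/[\d.]+")

def is_pywikibot(user_agent: str | None) -> bool:
    """Return True if the User-Agent *claims* to be Pywikibot (spoofable)."""
    return bool(PYWIKIBOT_UA.search(user_agent or ""))
```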
For higher confidence you would normally also verify the source IP against the operator's published ranges, since UA strings can be spoofed while IP ownership is harder to fake. Wikimedia has not published ranges for Pywikibot, though, so User-Agent matching is the only check available here.
- Renders JavaScript: No
- IP verification: User-Agent only
- Crawl frequency: Per user action
- Honors robots.txt: Yes
- Honors Crawl-delay: Varies
Wikimedia runs 6 bots in total. Each one is a separate user-agent so you can allow or block them independently.
Link Unfurler (4)
- Citoid
- ZoteroTranslationServer
- Wikipedia Bot
- Pywikibot (you are here)

Brand Intelligence (1)
- Generic Crawler

Should I let Pywikibot through?
In most cases, yes. Fetchers power link previews and feed readers. Blocking breaks the user experience on social and messaging platforms. If volume gets noisy, rate-limit it before you block it outright.
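One way to do that rate-limiting, as a minimal in-app sketch. The window and threshold are illustrative, not recommendations, and a real deployment would usually enforce this at the CDN or edge instead:

```python
import time
from collections import deque

# Sliding-window limiter sketch: allow at most LIMIT Pywikibot requests
# per WINDOW_SECONDS before answering 429. Numbers are placeholders.
WINDOW_SECONDS = 60
LIMIT = 30
_hits: deque[float] = deque()

def allow_pywikibot_request(now: float | None = None) -> bool:
    """Return True to serve the request, False to respond 429 instead."""
    now = time.monotonic() if now is None else now
    # Drop hits that have aged out of the window.
    while _hits and now - _hits[0] > WINDOW_SECONDS:
        _hits.popleft()
    if len(_hits) >= LIMIT:
        return False  # rate-limit with a 429; don't block outright
    _hits.append(now)
    return True
```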
Does blocking Pywikibot affect my Google rankings?
No. Pywikibot is not a search-engine crawler. Your ranking on Google or Bing is unaffected by what you do here.
How do I confirm a request is really from Pywikibot?
Look at the User-Agent header in your access logs and match it against the pattern shown above. Keep in mind that the User-Agent is easy to fake, so this check tells you "the traffic claims to be Pywikibot", not "the traffic is genuinely Pywikibot". If you need stronger guarantees, fall back to a reverse-DNS check (sketched below) or wait for Wikimedia to publish IP ranges.
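A sketch of that reverse-DNS check in Python. It assumes genuine traffic reverse-resolves to a `*.wikimedia.org` host; that suffix is an assumption, so verify it against hosts you actually see in your logs:

```python
import socket

def verified_wikimedia_ip(ip: str) -> bool:
    """Forward-confirmed reverse DNS: PTR record plus matching A/AAAA lookup."""
    try:
        host, _, _ = socket.gethostbyaddr(ip)  # reverse (PTR) lookup
    except (socket.herror, socket.gaierror):
        return False
    # Assumption: genuine Wikimedia hosts end in .wikimedia.org.
    if not host.endswith(".wikimedia.org"):
        return False
    try:
        # Forward-confirm so a forged PTR record alone doesn't pass.
        return ip in {info[4][0] for info in socket.getaddrinfo(host, None)}
    except socket.gaierror:
        return False
```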
What breaks if I block Pywikibot?
Link previews, embeds, and unfurls on whatever surface Pywikibot feeds will stop rendering. Users sharing your URLs on Wikimedia platforms will see a bare link instead of a rich card. That's usually the first thing people regret after blocking.
How is Pywikibot different from Wikimedia's other bots?
Wikimedia splits its work across multiple user-agents so site owners can decide on each one independently. Training crawlers, live-fetch agents, search indexers, and agentic browsers each get their own name. It's worth scanning the rest of the Wikimedia family above to see which ones actually matter for your site.
What's the cleanest way to control Pywikibot?
Two layers. Robots.txt for the polite crawlers that read it, and rules at your CDN or edge for the ones that don't. Rankly's Agent Experience handles both from a single config, so you can allow, block, rate-limit, or serve a stripped-down version per bot. Agent Analytics handles the observation half so you know which bots are actually worth a rule.
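For the robots.txt layer, a minimal sketch, assuming the robots.txt token is `Pywikibot` (check your logs to confirm; the path is a placeholder):

```
# Assumes the token "Pywikibot" matches this bot; verify before deploying.
User-agent: Pywikibot
Disallow: /private/
# Crawl-delay support varies (see the facts above), so don't rely on it alone.
Crawl-delay: 10
```

Since the facts above list Crawl-delay support as "Varies", treat the robots.txt rule as the polite layer and keep the edge rule as the enforced one.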