jpg-newsbot
jpg-newsbot is a single-page fetcher with no publicly identified operator. It fetches one page (or a small set) when triggered by a user action, typically a link being shared on social media, pasted into a messaging app, or opened in an RSS reader.
Volume tracks shares and clicks rather than crawl schedules. A trending link can produce a sudden spike, but jpg-newsbot will not crawl the rest of your site.
Blocking it usually means the link previews on the corresponding platform stop showing your title, image, and description.
See jpg-newsbot on your own site
Match the User-Agent header on incoming requests against a pattern like the one sketched below.
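The exact string jpg-newsbot sends isn't documented, so this sketch just looks for the bot's name as a case-insensitive token; treat the pattern as illustrative rather than canonical.

```python
import re

# Illustrative pattern: match the bot's name anywhere in the UA header.
# The canonical UA string isn't published, so tighten this if your logs show more detail.
JPG_NEWSBOT_UA = re.compile(r"jpg-newsbot", re.IGNORECASE)

def claims_jpg_newsbot(user_agent: str) -> bool:
    """True if the User-Agent header claims to be jpg-newsbot."""
    return bool(JPG_NEWSBOT_UA.search(user_agent or ""))

# Example check against a made-up UA value
print(claims_jpg_newsbot("Mozilla/5.0 (compatible; jpg-newsbot/1.0)"))  # True
```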
Be aware that UA strings can be spoofed, and jpg-newsbot has no identified operator or published IP ranges to verify the source address against, so treat a match as a claim rather than proof.
Renders JavaScript: No
IP verification: User-Agent only
Crawl frequency: Per user action
Honors robots.txt: Yes
Honors Crawl-delay: Varies
Should I let jpg-newsbot through?
In most cases, yes. Fetchers power link previews and feed readers. Blocking breaks the user experience on social and messaging platforms. If volume gets noisy, rate-limit it before you block it outright.
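If you do reach for a rate limit, this is the shape of it: a minimal in-process sketch with an arbitrary 60-requests-per-minute ceiling. In practice you'd configure the equivalent at your CDN or edge rather than in application code.

```python
import time
from collections import deque

MAX_HITS = 60          # arbitrary example ceiling
WINDOW_SECONDS = 60.0

_recent_hits = deque()  # timestamps of recent requests claiming to be jpg-newsbot

def allow_fetcher_request() -> bool:
    """Return True to serve the fetch, False to answer with a 429 instead."""
    now = time.monotonic()
    # Drop timestamps that have aged out of the window.
    while _recent_hits and now - _recent_hits[0] > WINDOW_SECONDS:
        _recent_hits.popleft()
    if len(_recent_hits) >= MAX_HITS:
        return False
    _recent_hits.append(now)
    return True
```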
Does blocking jpg-newsbot affect my Google rankings?
No. jpg-newsbot is not a search-engine crawler. Your ranking on Google or Bing is unaffected by what you do here.
How do I confirm a request is really from jpg-newsbot?
Look at the User-Agent header in your access logs and match it against the strings listed above. Keep in mind that the User-Agent is easy to fake, so this check tells you "the traffic claims to be jpg-newsbot", not "the traffic is genuinely jpg-newsbot". Stronger guarantees aren't available until the operator publishes IP ranges or a reverse-DNS scheme you can verify against.
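As a rough example, assuming a combined-format access log at a typical nginx path (both are assumptions; adapt the parsing to your own layout), this pulls out every request that claims to be jpg-newsbot along with its source IP:

```python
import re

UA_FIELD = re.compile(r'"([^"]*)"\s*$')  # UA is the last quoted field in combined format

def requests_claiming_jpg_newsbot(log_path):
    """Yield (source_ip, user_agent) for log lines whose UA claims to be jpg-newsbot."""
    with open(log_path) as log:
        for line in log:
            match = UA_FIELD.search(line)
            if match and "jpg-newsbot" in match.group(1).lower():
                yield line.split(" ", 1)[0], match.group(1)

for ip, ua in requests_claiming_jpg_newsbot("/var/log/nginx/access.log"):
    print(ip, ua)
```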
What breaks if I block jpg-newsbot?
Link previews, embeds, and unfurls on whatever surface jpg-newsbot feeds will stop rendering. Users sharing your URLs into that platform will see a bare link instead of a rich card. That's usually the first thing people regret after blocking a fetcher.
Why can't I tell who operates jpg-newsbot?
Some bots run under generic User-Agent strings or are operated by smaller, less-documented companies. The pragmatic default is to treat unverified operators as untrusted traffic. If volume climbs, log the source IPs and check whether they cluster around a single network or ASN. That'll usually surface who's actually behind it.
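A quick sketch of that check: bucket the source IPs by network prefix and see whether one range dominates. The addresses below are documentation examples; a whois or RDAP lookup on the busiest prefix then names the network.

```python
from collections import Counter
import ipaddress

def cluster_by_network(ips, prefix=24):
    """Count requests per /prefix network so a dominant range stands out."""
    networks = Counter()
    for ip in ips:
        try:
            networks[str(ipaddress.ip_network(f"{ip}/{prefix}", strict=False))] += 1
        except ValueError:
            continue  # skip malformed addresses
    return networks

# Example with documentation-range addresses; feed it the IPs from your own logs.
print(cluster_by_network(["203.0.113.10", "203.0.113.42", "198.51.100.7"]).most_common())
```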
What's the cleanest way to control jpg-newsbot?
Two layers. Robots.txt for the polite crawlers that read it, and rules at your CDN or edge for the ones that don't. Rankly's Agent Experience handles both from a single config, so you can allow, block, rate-limit, or serve a stripped-down version per bot. Agent Analytics handles the observation half so you know which bots are actually worth a rule.
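For the robots.txt layer, a hedged example: it assumes jpg-newsbot's robots token matches its User-Agent name (operators sometimes use a different token) and uses a hypothetical /private/ path. Since the bot fetches on user action rather than crawling, the usual stance is to allow it and lean on edge rules for rate-limiting.

```
# Hypothetical rules; the robots token is assumed to match the UA name.
User-agent: jpg-newsbot
Disallow: /private/
```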