Uptime Kuma
Uptime Kuma is a self-hosted, open-source monitoring tool. It runs on behalf of a site owner for monitoring, uptime checks, performance audits, or internal QA.
If you run the site, you are probably the one who set up this monitor, or the customer of whoever did. Blocking it would hide your own monitoring data from yourself.
If you are an end-user surprised to see this in your logs, it is almost always something a third-party SaaS is doing on behalf of someone who manages your site.
See Uptime Kuma on your own site
Match the User-Agent header on incoming requests against the tool's User-Agent string.
For higher confidence, also check the source IP. UA strings can be spoofed; because Uptime Kuma is self-hosted, a genuine probe should come from the host running your own (or your vendor's) instance, and a source IP is harder to fake than a header.
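If you want to do this from a quick script rather than by eye, a minimal sketch is below. The log path and the "Uptime-Kuma" pattern are assumptions rather than strings published by the project; check what actually appears in your own logs and adjust both.

```python
# Sketch: flag access-log lines whose User-Agent matches an assumed
# Uptime Kuma pattern. LOG_PATH and UA_PATTERN are assumptions, not
# values published by the project; adjust them to match your setup.
import re

LOG_PATH = "/var/log/nginx/access.log"          # assumed log location
UA_PATTERN = re.compile(r"Uptime-Kuma", re.I)   # assumed UA substring

def find_probe_lines(path: str) -> list[str]:
    """Return log lines whose quoted User-Agent field matches the pattern."""
    hits = []
    with open(path, encoding="utf-8", errors="replace") as log:
        for line in log:
            # In the common nginx/Apache combined format the User-Agent
            # is the last quoted field on the line.
            fields = line.rsplit('"', 2)
            if len(fields) >= 2 and UA_PATTERN.search(fields[-2]):
                hits.append(line.rstrip())
    return hits

if __name__ == "__main__":
    for hit in find_probe_lines(LOG_PATH):
        print(hit)
```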
Renders JavaScript: No
IP verification: User-Agent only
Crawl frequency: Scheduled probes
Honors robots.txt: Yes
Honors Crawl-delay: Varies
Should I let Uptime Kuma through?
In most cases, yes. It is almost always run by you or your vendor, and blocking it hides your own monitoring data from yourself. If the volume gets noisy, rate-limit it before you block it outright.
Does blocking Uptime Kuma affect my Google rankings?
No. Uptime Kuma is not a search-engine crawler. Your ranking on Google or Bing is unaffected by what you do here.
How do I confirm a request is really from Uptime Kuma?
Look at the User-Agent header in your access logs and match it against the strings listed above. Keep in mind that the User-Agent is easy to fake, so this check tells you "the traffic claims to be Uptime Kuma", not "the traffic is genuinely Uptime Kuma". If you need a stronger guarantee, cross-check the source IP against the host that actually runs your (or your vendor's) Uptime Kuma instance; because the tool is self-hosted, there are no centrally published IP ranges to verify against.
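If you already know which machine runs your monitor, that stronger check is easy to sketch. The UA hint and the allowlisted addresses below are placeholders, not anything Uptime Kuma publishes:

```python
# Sketch: treat a request as "probably our monitor" only when both the
# User-Agent and the source IP look right. ALLOWED_SOURCES and UA_HINT
# are made-up placeholders; use the host(s) actually running your (or
# your vendor's) Uptime Kuma instance.
import ipaddress

ALLOWED_SOURCES = {                       # assumed monitoring hosts
    ipaddress.ip_address("203.0.113.10"),
    ipaddress.ip_address("198.51.100.7"),
}
UA_HINT = "uptime-kuma"                   # assumed UA substring, lowercased

def looks_like_our_monitor(source_ip: str, user_agent: str) -> bool:
    """True only if the UA claims Uptime Kuma AND the IP is one we run."""
    claims_kuma = UA_HINT in user_agent.lower()
    known_source = ipaddress.ip_address(source_ip) in ALLOWED_SOURCES
    return claims_kuma and known_source

# A spoofed UA from an unknown address fails; a probe from your own host passes.
print(looks_like_our_monitor("192.0.2.55", "Uptime-Kuma/1.23.0"))    # False
print(looks_like_our_monitor("203.0.113.10", "Uptime-Kuma/1.23.0"))  # True
```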
What's the best way to understand what Uptime Kuma is doing on my site?
Look at which URLs it hits, how often, and what time of day. The request pattern usually tells you whether it's building an index, watching for a specific change, or trying to pull data in bulk. The User-Agent name alone rarely tells the full story.
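One way to get that picture is to bucket requests by path and hour straight from the access log. A rough sketch, assuming the common combined log format and a hypothetical log path:

```python
# Sketch: count requests per (day, hour, URL) so steady probes stand out
# from bulk pulls. The log path and the combined-format regex are
# assumptions; adjust them to your server's log layout.
import re
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"   # assumed log location
LINE_RE = re.compile(
    r'\[(\d{2}/\w{3}/\d{4}):(\d{2}):\d{2}:\d{2}[^\]]*\]\s+"(\S+)\s+(\S+)'
)

def cadence(path: str) -> Counter:
    """Count requests per (day, hour, URL path)."""
    counts: Counter = Counter()
    with open(path, encoding="utf-8", errors="replace") as log:
        for line in log:
            m = LINE_RE.search(line)
            if m:
                day, hour, _method, url = m.groups()
                counts[(day, hour, url)] += 1
    return counts

if __name__ == "__main__":
    for (day, hour, url), n in sorted(cadence(LOG_PATH).items()):
        print(f"{day} {hour}:00  {url}  {n} hits")
```

A monitor typically shows up as the same one or two URLs hit at a fixed interval around the clock; anything broader deserves a closer look.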
What's the cleanest way to control Uptime Kuma?
Two layers: robots.txt for the polite crawlers that read it, and rules at your CDN or edge for the ones that don't. Rankly's Agent Experience handles both from a single config, so you can allow, block, rate-limit, or serve a stripped-down version per bot. Agent Analytics handles the observation half, so you know which bots are actually worth a rule.
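For the edge-rule half, the underlying decision is simple enough to sketch. This is not Rankly's API or any real CDN product, just an illustration of the allow / rate-limit / block choice with made-up bot hints and limits:

```python
# Sketch: map a User-Agent hint to a per-bot policy and apply it to one
# incoming request. POLICIES, the hints, and the 60-per-minute budget are
# invented for illustration; a real edge rule would live in your CDN.
import time
from collections import defaultdict, deque

POLICIES = {
    "uptime-kuma": ("rate_limit", 60),   # assumed budget: 60 probes per minute
    "badbot":      ("block", 0),
}
_recent: dict = defaultdict(deque)       # per-bot timestamps of recent hits

def decide(user_agent: str) -> str:
    """Return 'allow', 'throttle', or 'block' for one incoming request."""
    ua = user_agent.lower()
    for hint, (action, per_minute) in POLICIES.items():
        if hint not in ua:
            continue
        if action == "block":
            return "block"
        if action == "rate_limit":
            window = _recent[hint]
            now = time.monotonic()
            while window and now - window[0] > 60:   # keep only the last 60 s
                window.popleft()
            if len(window) >= per_minute:
                return "throttle"
            window.append(now)
        return "allow"
    return "allow"                        # unknown agents fall through

print(decide("Uptime-Kuma/1.23.0"))      # "allow" until the minute budget is spent
```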