When a new WordPress site is published, it rarely stays unnoticed for long.
Automated bots continuously scan the internet looking for new servers, CMS installations, and potential vulnerabilities. Even small websites can be discovered quickly by these automated systems.
In this article, we analyze real server logs from a newly launched WordPress site and examine what kinds of bots appeared shortly after the site went online.
Timeline of the WordPress Site Launch
The following timeline shows when the site was set up and when the first automated requests appeared in the server logs.
18:00 – Domain registered
18:16 – First automated request recorded in the server logs
21:55 – WordPress configuration completed, including the login URL change
The domain for this site was registered at around 18:00 JST.
The first automated request appeared in the server logs at 18:16:43 JST, only about 16 minutes later.
WordPress setup and configuration continued afterward, and the login URL was changed later that evening as part of basic security hardening.
First Bot Access Recorded in the Server Logs
The earliest automated request recorded in the server logs appeared shortly after the domain was registered.
Example log entry:
22/Feb/2026:18:16:43 +0900
98.88.36.8 - "GET / HTTP/2.0"
"User-Agent: okhttp/5.3.0"This request occurred only about 16 minutes after the domain registration.
The user agent indicates that the request was made using OkHttp, a popular HTTP client library commonly used in automated tools, crawlers, and scanning systems.
Requests with this type of user agent are often generated by scripts or automated programs that check whether a server is reachable.
Subsequent requests from the same source included:
GET /
GET /favicon.ico
GET /wp-includes/images/w-logo-blue-white-bg.png
This behavior suggests the bot was attempting to identify the website and determine whether it was running WordPress.
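Entries like the ones above can be pulled apart programmatically. The sketch below parses a single entry with a regular expression, assuming the timestamp / IP / request / user-agent field order shown in this article's excerpts; real access-log formats vary, so the pattern would need adjusting for other servers.

```python
import re

# Matches the field order used in this article's log excerpts:
# timestamp, client IP, quoted request line, quoted user agent.
LOG_LINE = re.compile(
    r'(?P<ts>\d{2}/\w{3}/\d{4}:\d{2}:\d{2}:\d{2} [+-]\d{4})\s+'
    r'(?P<ip>\d{1,3}(?:\.\d{1,3}){3})\s+-\s+'
    r'"(?P<request>[^"]*)"\s+'
    r'"User-Agent: (?P<ua>[^"]*)"'
)

def parse_entry(line: str):
    """Return a dict of fields from one log entry, or None if it doesn't match."""
    m = LOG_LINE.search(line)
    return m.groupdict() if m else None

entry = parse_entry(
    '22/Feb/2026:18:16:43 +0900 98.88.36.8 - "GET / HTTP/2.0" "User-Agent: okhttp/5.3.0"'
)
print(entry["ip"], entry["ua"])  # 98.88.36.8 okhttp/5.3.0
```

Once entries are parsed into fields, filtering by IP or user agent becomes a one-liner.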
Automated Crawlers and Headless Browsers
Another interesting request appeared shortly afterward.
22/Feb/2026:22:35:07 +0900
IP: 3.238.171.161
GET /
User-Agent: HeadlessChrome/138
Headless browsers are commonly used by:
- automated scanners
- website analysis tools
- vulnerability scanners
- indexing services
Unlike simple crawlers, headless browsers render the page just like a real browser. This allows scanners to analyze JavaScript behavior and page structure.
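The user-agent strings seen so far already hint at what kind of visitor each request came from. A rough classifier like the one below (the substrings checked are illustrative, not a complete catalogue, and user agents can be spoofed) makes that triage explicit:

```python
def classify_user_agent(ua: str) -> str:
    """Rough, best-effort classification of a user-agent string into the
    visitor types discussed in this article. User agents are self-reported,
    so this is a hint, not proof of identity."""
    ua_lower = ua.lower()
    if "headlesschrome" in ua_lower:
        return "headless browser"
    if "googlebot" in ua_lower or "bingbot" in ua_lower:
        return "search engine crawler"
    if any(lib in ua_lower for lib in ("okhttp", "python-requests", "curl", "go-http-client")):
        return "http client library"
    return "unknown"

print(classify_user_agent("HeadlessChrome/138"))  # headless browser
print(classify_user_agent("okhttp/5.3.0"))        # http client library
```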
CMS Detection Scanners
Another request in the logs came from a bot identifying itself as:
User-Agent: CMS-Checker/1.0
Example:
23/Feb/2026:00:55:27
IP: 35.224.48.77
GET /
User-Agent: CMS-Checker/1.0
Tools like this attempt to determine:
- which CMS a site is running
- what plugins or themes may be installed
- whether known vulnerabilities might exist
Since WordPress is the most widely used CMS in the world, scanners frequently check for it.
Typical WordPress Resource Requests
Many bots in the logs requested common WordPress resources:
GET /favicon.ico
GET /wp-includes/images/w-logo-blue-white-bg.png
These requests are not random.
They are often used to fingerprint WordPress installations.
For example, /wp-includes/images/w-logo-blue-white-bg.png is a default WordPress file, so requesting it can help confirm that the site is running WordPress.
Bots often combine these checks with additional probes to identify the site’s software stack.
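When reviewing logs, it can be useful to flag which requested paths look like WordPress fingerprint probes. The path list below is a small illustrative sample; real scanners probe many more locations.

```python
# A few default WordPress paths that scanners commonly request to
# fingerprint a site. Illustrative only; real probe lists are much longer.
WORDPRESS_FINGERPRINT_PATHS = {
    "/wp-includes/images/w-logo-blue-white-bg.png",
    "/wp-login.php",
    "/readme.html",
    "/xmlrpc.php",
}

def looks_like_wp_probe(path: str) -> bool:
    """True if a requested path matches a known WordPress fingerprint
    or lives under a wp-* directory."""
    return path in WORDPRESS_FINGERPRINT_PATHS or path.startswith("/wp-")

print(looks_like_wp_probe("/wp-includes/images/w-logo-blue-white-bg.png"))  # True
print(looks_like_wp_probe("/favicon.ico"))                                  # False
```

Note that /favicon.ico is requested by nearly every client, so on its own it is not a WordPress-specific signal.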
Automated Requests from HTTP Libraries
Another user-agent found in the logs was:
User-Agent: okhttp/5.3.0
Example log entry:
22/Feb/2026:22:34:08
IP: 3.238.171.161
GET /
User-Agent: okhttp/5.3.0
OkHttp is a popular HTTP client library used in automated tools and scripts.
Requests using this user-agent are typically generated by:
- automated scanners
- data collection tools
- monitoring systems
- research crawlers
These bots often perform lightweight HTTP requests to quickly check if a server is reachable.
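A quick way to see which automated clients dominate a log is to tally user agents across raw lines. This sketch assumes the quoting convention used in the excerpts above:

```python
import re
from collections import Counter

# Extract the quoted user-agent field as it appears in this article's excerpts.
UA_PATTERN = re.compile(r'"User-Agent: ([^"]*)"')

def user_agent_counts(log_lines):
    """Count occurrences of each user-agent string across raw log lines."""
    counter = Counter()
    for line in log_lines:
        m = UA_PATTERN.search(line)
        if m:
            counter[m.group(1)] += 1
    return counter

sample = [
    '22/Feb/2026:18:16:43 +0900 98.88.36.8 - "GET / HTTP/2.0" "User-Agent: okhttp/5.3.0"',
    '22/Feb/2026:22:34:08 +0900 3.238.171.161 - "GET / HTTP/2.0" "User-Agent: okhttp/5.3.0"',
    '22/Feb/2026:22:35:07 +0900 3.238.171.161 - "GET / HTTP/2.0" "User-Agent: HeadlessChrome/138"',
]
print(user_agent_counts(sample).most_common(1))  # [('okhttp/5.3.0', 2)]
```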
Search Engine Crawlers
Not all bots are security scanners.
The logs also show activity from Googlebot.
22/Feb/2026:23:28:57
IP: 35.198.213.229
GET /
User-Agent: Googlebot/2.1
Search engine crawlers often appear soon after a new site becomes accessible, especially if the domain is linked somewhere or has been indexed previously.
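A caveat: scanners sometimes spoof the Googlebot user agent, so the string alone proves nothing. Google documents a verification procedure based on reverse DNS: resolve the client IP to a hostname, check that it belongs to googlebot.com or google.com, then forward-resolve the hostname to confirm it maps back to the same IP. Only the offline domain check is sketched here; the DNS lookups (e.g. via socket.gethostbyaddr) are left out so the example runs without network access.

```python
def hostname_is_googlebot(hostname: str) -> bool:
    """Check whether a reverse-DNS hostname belongs to Google's crawler
    domains. This is only one step of verification: the hostname must
    also be forward-resolved to confirm it maps back to the client IP."""
    return hostname.rstrip(".").endswith((".googlebot.com", ".google.com"))

print(hostname_is_googlebot("crawl-66-249-66-1.googlebot.com"))  # True
print(hostname_is_googlebot("fake-googlebot.example.net"))       # False
```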
WordPress Internal Requests
Some log entries were generated by WordPress itself.
22/Feb/2026:23:26:28
POST /wp-cron.php
User-Agent: WordPress/6.9.1
This is WordPress’s internal scheduling system (WP-Cron) executing background tasks such as:
- scheduled publishing
- plugin tasks
- update checks
These requests are normal and not related to external scanning activity.
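When tallying bot traffic, it helps to exclude these internal requests first. The check below assumes the pattern visible in the entry above: WordPress's loopback requests identify themselves with a "WordPress/" user-agent prefix and target /wp-cron.php.

```python
def is_internal_wp_request(user_agent: str, path: str) -> bool:
    """Return True for WordPress's own WP-Cron loopback requests, like the
    wp-cron.php entry shown above, so they can be excluded when counting
    external bot traffic. Assumes the 'WordPress/' user-agent prefix that
    WordPress sends for its internal HTTP requests."""
    return user_agent.startswith("WordPress/") and path == "/wp-cron.php"

print(is_internal_wp_request("WordPress/6.9.1", "/wp-cron.php"))  # True
print(is_internal_wp_request("okhttp/5.3.0", "/"))                # False
```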
What These Logs Tell Us
Even within the first day after launch, the server logs already show multiple types of automated visitors:
- internet scanning platforms
- CMS detection tools
- headless browser scanners
- search engine crawlers
- automated HTTP clients
This highlights an important reality of running websites on the internet:
New sites are discovered and scanned very quickly.
Even small personal blogs can receive automated requests within hours of going online.
Why Bots Perform These Scans
Bots continuously scan the internet for potential targets.
Their goals may include:
- identifying WordPress installations
- detecting outdated plugins
- finding exposed files
- mapping the web for research purposes
Most of these scans are automated and indiscriminate. They do not target a specific website — they simply scan everything they encounter.
Security Implications
The early scanning activity observed in the logs highlights an important reality of the modern internet.
Newly launched websites are quickly discovered by automated scanners that continuously probe servers for common vulnerabilities and misconfigurations.
These bots often look for exposed configuration files, administrative interfaces, and CMS fingerprints in order to identify potential attack targets.
For WordPress site owners, this means that basic security practices should be considered even before a site becomes popular.
Measures such as:
- keeping WordPress updated
- restricting access to sensitive files
- monitoring server logs
can help reduce potential risk.
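The log-monitoring measure above can start very small. This sketch flags requests for a handful of sensitive paths; the path list is an illustrative assumption, and a real monitor would also track request rates per IP.

```python
# Paths that external clients have no legitimate reason to request directly
# on most sites. Illustrative sample only.
SENSITIVE_PATHS = ("/wp-login.php", "/xmlrpc.php", "/wp-config.php", "/.env")

def flag_suspicious(log_entries):
    """Yield (ip, path) pairs for requests touching sensitive paths.
    Each entry is a (ip, method, path, user_agent) tuple."""
    for ip, method, path, ua in log_entries:
        if any(path.startswith(p) for p in SENSITIVE_PATHS):
            yield ip, path

entries = [
    ("98.88.36.8", "GET", "/", "okhttp/5.3.0"),
    ("203.0.113.5", "GET", "/wp-login.php", "Mozilla/5.0"),
]
print(list(flag_suspicious(entries)))  # [('203.0.113.5', '/wp-login.php')]
```

Feeding it the parsed fields from earlier in the article would turn this into a simple daily report.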
Conclusion
The server logs show that automated bots began interacting with the WordPress site within hours of its launch.
These early visitors included:
- internet scanning platforms
- CMS detection bots
- headless browser scanners
- search engine crawlers
- automated HTTP clients
This demonstrates how quickly new websites become visible on the internet and why even small sites should consider basic security practices from the start.
Understanding these logs is the first step toward understanding how automated reconnaissance works on the modern internet.
Even a newly created WordPress site can attract automated scanners within hours of going live.
Observing these patterns can help administrators better understand the background scanning activity that constantly occurs across the internet.
