Debugging hung PHP-FPM processes on Linux
1. The Server State
I checked the server yesterday. It runs a clinic site built on the Apicona – Health Medical WordPress Theme, and the client reported that it had become slow. I logged in over SSH and ran uptime: the load average was 15.2, which is very high for a four-core machine. Then I ran top. CPU idle was 98 percent and memory was only half used. That combination is the classic signature of a wait problem: a high load with an idle CPU means processes are stuck in a queue, waiting on some resource instead of doing work. I needed to find the queue.
I ruled out the usual suspects first. iostat -x 1 5 showed disk utilization at 1 percent, so the disk was not the problem. nload showed low traffic, so the network was not the problem either. The process list, however, showed many PHP-FPM workers, all in the "Sleep" state, waiting for a resource. On the database side, mysqladmin processlist showed ten MySQL connections, also sleeping. Everything was waiting on everything else, so I decided to look at the system calls.
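Before reaching for heavier tools, a one-liner tally of process states shows where the crowd is sitting. This is a generic sketch that reads /proc directly (so it works even without procps installed), not output from this server:

```shell
# Tally processes by state letter (R=running, S=sleeping, D=uninterruptible, Z=zombie).
# Field 3 of /proc/<pid>/stat is the state; strip everything up to the
# closing ')' of the comm field first, since comm can contain spaces.
for f in /proc/[0-9]*/stat; do
    sed 's/.*) //' "$f" 2>/dev/null | cut -d' ' -f1
done | sort | uniq -c | sort -rn
```

A pile of D-state entries next to an idle CPU points straight at I/O or lock waits.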
2. Using Lsof on PHP Processes
I needed to see what files each worker had open, so I used lsof. I picked one PHP-FPM process, PID 4501, and ran lsof -p 4501. The list showed the standard libraries and the theme files, and then a Unix domain socket: /var/run/mysqld/mysqld.sock. lsof does not make the state of a Unix socket obvious, so I had to dig further.
I looked at the number of open files. I typed ls /proc/4501/fd | wc -l. The result was 50. This is a normal number. The process was not leaking file handles. But it was not closing the socket. I checked another process. The ID was 4502. It also had the same socket open. I checked ten processes. All of them had the same socket open. They were all waiting for the database to respond. I needed to see why the database was not talking back.
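Checking descriptor counts across many workers is easier with a tiny helper over /proc. A sketch (Linux-only; the commented pgrep loop is how it would apply to the PHP-FPM pool):

```shell
# Count open file descriptors of a process via /proc (Linux only).
fd_count() {
    ls "/proc/$1/fd" | wc -l
}

# Demonstrate on the current shell itself:
fd_count $$

# Applied to the pool (sketch):
# for pid in $(pgrep php-fpm); do echo "$pid: $(fd_count "$pid")"; done
```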
I looked at the MySQL error log. I typed tail -n 100 /var/log/mysql/error.log. I saw no errors. I looked at the slow query log. It was empty. The database thought it was idle. The PHP processes thought they were busy. This happens when the connection is half-open. It also happens when the socket buffer is full. I decided to look at the kernel socket tables.
3. Investigating the Unix Sockets
I opened the /proc/net/unix file with cat. The file is dense with hex columns, but I found the line for the MySQL socket path. The RefCnt was high and the Flags were 00010000, which is __SO_ACCEPTCON: this is the listening socket. Its State was 01 (SS_UNCONNECTED, which is normal for a listener; established stream connections show 03). The listener existed, but no data was moving through it.
I used the ss -xl command, which shows listening Unix sockets, and looked at the "Recv-Q" and "Send-Q" columns. For a listener, Recv-Q is the current length of the accept queue and Send-Q is its backlog limit. The MySQL socket showed 128 in both: the listen backlog was full. New connections could not reach the database; the kernel was holding the connecting PHP processes in a queue that had no free slots, so they simply sat there.
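A small awk filter makes the full-queue condition easy to spot. The sample lines below imitate the ss -xl column layout (Netid, State, Recv-Q, Send-Q, path, inode) and are illustrative, not captured output:

```shell
# Flag listening unix sockets whose accept queue (field 3, Recv-Q)
# has reached its backlog limit (field 4, Send-Q).
printf '%s\n' \
  'u_str LISTEN 128 128 /var/run/mysqld/mysqld.sock 12345' \
  'u_str LISTEN 0 511 /run/php/php8.1-fpm.sock 12346' |
awk '$3 >= $4 { print "backlog full:", $5 }'
```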
I needed to know why the queue was full. MySQL should take the connections fast. I checked the MySQL configuration. I typed cat /etc/mysql/my.cnf. I looked for the back_log setting. It was set to 128. This matched the ss output. I looked for max_connections. It was set to 150. I looked for thread_cache_size. It was set to 8. These are default numbers. They are often too low for busy themes. The Apicona – Health Medical WordPress Theme has many features. It makes many database calls.
4. Analyzing PHP-FPM Pool Settings
I checked the PHP-FPM pool file. I typed nano /etc/php/8.1/fpm/pool.d/www.conf. I looked at the pm settings. It was set to dynamic. The pm.max_children was 50. The pm.start_servers was 5. The pm.min_spare_servers was 5. The pm.max_spare_servers was 35.
I looked at the listen.backlog setting. It was set to 511, far higher than the MySQL backlog — a mismatch. PHP-FPM can queue 511 connections from Nginx, but MySQL can only queue 128 from PHP. When more than 128 connection attempts are pending at once, the overflow waits in the kernel, and if the PHP scripts are slow, the queue never drains.
I checked the request_terminate_timeout. It was set to 0. This means a script can run forever. I checked request_slowlog_timeout. It was set to 0. This means no slow scripts are logged. This was the second problem. A few slow scripts were holding the database connections. Because they never timed out, they filled the MySQL queue. Once the queue was full, every other PHP process got stuck. This is why the load was high but the CPU was low. The CPU had no work because everyone was waiting for the lock.
5. Digging Into the Proc Filesystem
I wanted the process status of the hung workers, so I read /proc/4501/status. The voluntary_ctxt_switches count was 120,000 against only 500 nonvoluntary_ctxt_switches. The process was constantly giving up the CPU of its own accord, which means it was waiting on I/O or a lock rather than being preempted.
I typed cat stack. This shows the kernel stack trace. I saw the unix_stream_connect function. I saw the wait_for_common function. This confirmed my theory. The process was stuck inside the kernel. It was waiting for the Unix socket to become available. It was not even in the PHP code yet. It was trying to connect to MySQL.
I checked the file descriptors again. I typed ls -l fd and saw 3 -> socket:[123456]. Grepping that inode number in /proc/net/unix showed a connection that had never completed: the kernel was trying to find a slot in MySQL's listen queue, could not, and had put the PHP process to sleep until one opened.
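The inode lookup is mechanical enough to script. A sketch of extracting the inode from an fd symlink target so it can be grepped in /proc/net/unix (the inode 123456 is the example from above):

```shell
# Pull the inode out of a /proc/<pid>/fd target like "socket:[123456]".
inode=$(echo 'socket:[123456]' | sed -n 's/^socket:\[\([0-9]*\)\]$/\1/p')
echo "$inode"
# grep " $inode " /proc/net/unix   # then inspect the St column
```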
6. Adjusting Kernel and MySQL Limits
I needed to fix the limits, starting with the kernel. In /etc/sysctl.conf I added net.core.somaxconn = 1024. This value is the ceiling on the backlog any listening socket may request; without raising it, MySQL's larger back_log would be silently capped at the old limit. I saved the file and applied it with sysctl -p.
Then I changed the MySQL config. I typed nano /etc/mysql/my.cnf. I added back_log = 512. I also increased max_connections to 500. I increased thread_cache_size to 64. A larger thread cache helps MySQL handle new connections faster. It does not have to create a new thread every time. It just picks one from the cache.
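For reference, the three MySQL changes together as a my.cnf fragment (the values are the ones from this investigation; they belong under the [mysqld] section):

```ini
[mysqld]
back_log          = 512   # listen backlog; capped by net.core.somaxconn
max_connections   = 500
thread_cache_size = 64    # reuse threads instead of creating one per connect
```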
I restarted MySQL. I typed systemctl restart mysql. I checked the backlog again with ss -xl. The "Send-Q" column now showed 512. The queue was bigger. But I still needed to stop the slow scripts from filling it.
7. Tuning the PHP-FPM Configuration
I went back to the PHP-FPM pool file. I typed nano /etc/php/8.1/fpm/pool.d/www.conf. I changed request_terminate_timeout to 30s. No script should run for more than 30 seconds on a medical site. I changed request_slowlog_timeout to 5s. This will log any script that takes more than 5 seconds.
I also changed the pm type. I changed it to static. Dynamic pools create and kill processes. This takes CPU time. Static pools keep the processes alive. I set pm.max_children to 60. This server has 8GB of RAM. Each PHP process uses about 50MB. 60 processes use 3GB. This is safe.
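My sizing rule of thumb as arithmetic (the reserved figure covers MySQL, the OS, and caches; all numbers are this server's — adjust for yours):

```shell
# Upper bound for pm.max_children:
# (total RAM - RAM reserved for everything else) / unique RSS per worker
total_mb=8192
reserved_mb=4096
per_worker_mb=50
echo $(( (total_mb - reserved_mb) / per_worker_mb ))
```

That gives 81 as a ceiling; I set 60 to keep a margin.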
I restarted PHP-FPM. I typed systemctl restart php8.1-fpm. I watched the logs. I typed tail -f /var/log/php8.1-fpm.log.slow. Within minutes, I saw a result. A script was taking 20 seconds. The file was /wp-content/themes/apicona/includes/health-check.php.
8. Analyzing the Theme Code
I opened the slow script. I used nano. I read the code. The script was trying to connect to an external API. It was checking a medical insurance database. The script used the file_get_contents function. It did not have a timeout. The external API was slow today.
Because the script had no timeout, it waited on the API indefinitely. While it waited, it held its MySQL connection open, even though the API call did not need the database; WordPress opens the connection early in the page load. This is a common flaw: theme code gets shipped and installed without anyone checking its I/O behavior.
I needed to fix the script. I replaced file_get_contents with a curl call. I added a 5-second timeout to the curl call. I also added a check. If the API is slow, the script should skip the check. It should not hang the whole server. I saved the file.
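I cannot paste the theme code here, but the shape of the fix is easy to demonstrate from the shell, with coreutils timeout standing in for the curl timeout and sleep 30 playing the part of the slow insurance API:

```shell
# Bound a slow external call instead of letting it hang the worker.
# `timeout` kills the command and exits non-zero (124) when the limit
# is hit, so the caller can skip the check and move on.
if timeout 1 sleep 30; then
    echo "api responded"
else
    echo "api too slow, skipping check"
fi
```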
9. Verifying the Fix with Netstat
I waited one hour. I checked the load. I typed uptime. The load was 0.4. This is much better. I checked the sockets. I typed netstat -an | grep /var/run/mysqld/mysqld.sock | wc -l. The result was 15. The connections were being used and closed quickly.
I used ss -xl again. The "Recv-Q" was 0. This means MySQL was accepting connections as fast as they arrived. The backlog was empty. The wait was gone. The PHP processes were now doing real work instead of sleeping.
I checked the slow log again. It was quiet. The insurance API was still slow sometimes. But now the curl timeout stopped the script from hanging. The server was stable. The clinic site was fast again.
10. Expanding the Investigation into TCP Stack
The server also uses TCP for some remote database calls. I checked the TCP backlog. I typed cat /proc/sys/net/ipv4/tcp_max_syn_backlog. The value was 128. This was also too low. I changed it to 1024. I used sysctl -w net.ipv4.tcp_max_syn_backlog=1024.
I checked the tcp_abort_on_overflow setting. It was 0, which means that when the accept queue is full the kernel silently drops the incoming SYN and lets the client retransmit; the connection eventually succeeds, but with added lag. Setting it to 1 makes the kernel send a reset instead, which surfaces the error faster in the logs. I kept it at 0 for now: I would rather the site load slowly during peaks than fail outright.
I checked the tcp_tw_reuse setting. It was 0, so I changed it to 1 and added it to /etc/sysctl.conf. This lets the kernel reuse sockets in the "TIME_WAIT" state for new outgoing connections, which helps when a server makes many short-lived connections.
11. Examining Process Memory Maps
I wanted to see if the PHP processes were fragmented. I used the pmap tool. I typed pmap 4501. I saw the memory blocks. Most were small. The heap was 10MB. The stack was 8MB. There were many shared libraries.
I looked for the "dirty" pages. I typed cat /proc/4501/smaps | grep -i dirty | awk '{sum+=$2} END {print sum}'. smaps reports sizes in kB, so the sum of roughly 12,000 kB means about 12MB of RAM unique to this process; the rest is shared with the other PHP workers. This confirms that 60 workers will fit in RAM easily.
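Since smaps reports sizes in kB, a variant of the one-liner that converts to MB is handy. The sample lines below stand in for real /proc/<pid>/smaps output:

```shell
# Sum Private_Dirty across smaps entries and report MB.
# smaps values are in kB; these lines are illustrative sample data.
printf '%s\n' \
  'Private_Dirty:      8192 kB' \
  'Private_Dirty:      4096 kB' \
  'Shared_Clean:      20480 kB' |
awk '/^Private_Dirty:/ { sum += $2 } END { print sum / 1024 " MB" }'
```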
I checked the "OOM score". I typed cat /proc/4501/oom_score. The result was 0. The kernel will not kill this process unless the RAM is totally gone. This is good. I want the web server to stay alive.
12. Monitoring Inode Usage and File Systems
A full disk is not just about bytes. It is about inodes too. I typed df -i. The usage was 10 percent. That was fine. I checked the /tmp folder. Many PHP sessions are stored there. If there are too many files, the ls command gets slow.
I used find /tmp -type f | wc -l. The result was 500. This is small. I checked the session cleanup task. It is a cron job in /etc/cron.d/php. It runs every 30 minutes. It was working correctly.
I looked at the mount options. I typed mount. The disk was mounted with relatime. This is good. It reduces the number of disk writes for file access times. I considered using noatime. But relatime is safe for most WordPress sites.
13. Refining MySQL Buffer Pool
The database needs RAM to be fast. I checked the buffer pool size. I typed mysql -e "show variables like 'innodb_buffer_pool_size';". The value was 128MB. This is too small for a 4GB database.
I changed it to 2GB. I edited my.cnf. I also enabled innodb_buffer_pool_instances = 2. This reduces contention between threads. I restarted MySQL. The disk I/O for reads went down. Now more data is in the RAM.
I checked the hit rate. I used mysqladmin extended-status | grep -i pool_read. The "Reads" were 100. The "Read_Requests" were 10,000. This is a 99 percent hit rate. This is perfect.
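The hit-rate arithmetic, spelled out with the two counters MySQL exposes (Innodb_buffer_pool_reads counts physical disk reads, i.e. misses; Innodb_buffer_pool_read_requests counts logical reads):

```shell
# Buffer pool hit rate = 100 - (misses / logical reads) * 100
reads=100        # Innodb_buffer_pool_reads (misses)
requests=10000   # Innodb_buffer_pool_read_requests
echo "$(( 100 - 100 * reads / requests ))% hit rate"
```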
14. Checking Nginx Worker Connections
The web server is Nginx. I checked its config. I typed nano /etc/nginx/nginx.conf. I looked at worker_connections. It was 768. I increased it to 2048. I looked at multi_accept. I turned it on. This lets Nginx take all new connections at once.
I checked the keepalive_timeout. It was 65. I reduced it to 15. This frees up the socket faster. I checked gzip. It was on. This saves bandwidth.
I checked the logs for "worker_connections are not enough". I saw no such errors. Nginx was handling the traffic well. The bottleneck was always the PHP to MySQL path.
15. Investigating Process States and Load Average
I want to explain why the load was high. On Linux, the load average counts processes in the "R" state (running or runnable) and the "D" state (uninterruptible sleep, usually a wait on disk or some other kernel resource).
Most of my workers showed "S" (interruptible sleep) in top, but the ones blocked inside the kernel's connect path sat in uninterruptible sleep while they waited for space in the listen queue. The kernel counts those as tasks that are not finished, so they inflate the load number even though they use no CPU.
This is why you can have a load of 100 and a CPU of 0 percent. It just means 100 processes are waiting for a resource. In this case, the resource was the MySQL listen queue.
16. Analyzing the Impact of WordPress Plugins
I used the wp-cli tool. I typed wp plugin list. I saw 20 plugins. Some were for SEO. Some were for security. I used wp plugin deactivate on a few heavy plugins. I watched the load.
The load did not change. This means the plugins were not the main issue. The main issue was the single slow call in the theme. This is a good lesson. One bad line of code is worse than ten heavy plugins.
I reactivated the plugins. I want the site to have its features. I just want it to be fast. The curl fix was the key.
17. Looking at Interrupts and Context Switches
I checked the system interrupts. I typed cat /proc/interrupts. I saw many interrupts for the network card. I saw many for the disk controller. The numbers were growing at a steady rate. There were no "interrupt storms".
I checked the context switches again with vmstat 1 5. The cs column showed about 2000 per second, which is modest for a web server; busy machines routinely run far higher. Context switching only becomes a concern when it is sustained at an order of magnitude above the machine's normal baseline, because then the CPU is wasting real time moving between tasks. 2000 is fine.
The in column showed 1000. These are interrupts per second. This is also fine. The hardware was not overwhelmed. The software was just stuck.
18. Checking the Entropy Pool
A web server needs entropy for SSL. I checked the available entropy. I typed cat /proc/sys/kernel/random/entropy_avail. The value was 3500. This is very good. If it goes below 200, the server slows down when making HTTPS connections.
I use haveged on some servers to increase entropy. But this server has a hardware random generator. I checked it with ls /dev/hwrng. It was there. The kernel uses it to fill the pool.
19. Final Check on MySQL Thread States
I used a loop to watch MySQL threads. I typed watch -n 1 "mysqladmin processlist". I saw the connections. Most were "Query" or "Sleep". None were "Locked" for a long time.
The back_log change was working. New connections were entering the database without waiting in the kernel. The buffer pool change was working. Queries were finishing in 0.00 seconds.
The site was now robust. I could click any page. It loaded in 0.5 seconds. The insurance check was skipped if it took too long. This is the correct behavior for a professional site.
20. Documenting the Sysctl Changes
I wrote down all the changes in my notebook. I always keep a log of what I do.
net.core.somaxconn = 1024
net.ipv4.tcp_max_syn_backlog = 1024
net.ipv4.tcp_tw_reuse = 1
vm.swappiness = 10
I also set vm.swappiness to 10. The default is 60. I want the server to use the RAM. I do not want it to use the disk swap unless it is necessary. This server has 8GB of RAM. It should not swap.
I checked the swap usage. I typed free -m. The "Swap" line was 0. This is perfect. Swap is for emergencies. It is not for daily use.
21. Reviewing the PHP-FPM Slow Log Format
I wanted to make the slow log more useful. I changed the format in www.conf. I added more details.
slowlog = /var/log/php8.1-fpm.log.slow
request_slowlog_trace_depth = 20
The trace_depth shows the full stack trace. It tells me exactly which function called the slow code. This makes it easy to fix bugs in the future.
I also checked the access.log for PHP-FPM. I turned it off. Nginx already has an access log. I do not need two logs for the same request. This saves disk writes.
22. Examining the Nginx Error Log for Upstream Failures
I checked the Nginx error log one last time. I typed tail -n 100 /var/log/nginx/error.log. I saw some old errors. "upstream timed out (110: Connection timed out)". These were from before the fix.
Since the fix, there were no new timeout errors. This means the communication between Nginx and PHP is perfect. I also checked for "recv() failed (104: Connection reset by peer)". These happen when PHP crashes. There were none.
The system is healthy. The services are talking to each other. The clinic can handle its patients.
23. Analyzing File System Fragmentation
I checked the fragmentation of the web files. I used filefrag. I typed filefrag /var/www/html/index.php. The result was "1 extent found". This means the file is in one continuous block. It is not fragmented.
I checked the theme files. They were also in good shape. Fragmentation is rare on modern SSDs with Ext4. But I like to check. A fragmented file takes longer to read. It causes small delays in the PHP engine.
24. Checking the MySQL Binlog
I looked at the binlog status. I typed mysql -e "show master status;". The file was mysql-bin.000045. I checked the size. It was 500MB. I checked the expiration. It was 7 days.
I do not need 7 days of logs. I changed it to 3 days. I typed SET GLOBAL binlog_expire_logs_seconds = 259200;. This saves disk space. It also makes the server faster during heavy writes.
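The seconds value is just arithmetic, but it is worth double-checking before setting it:

```shell
# binlog_expire_logs_seconds takes seconds, so 3 days is:
echo $(( 3 * 24 * 3600 ))
```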
I checked the sync_binlog setting. It was 1. This is safe but slow. I left it at 1. Data safety is important for a medical site. I do not want to lose patient bookings if the power goes out.
25. Verifying the PHP Opcache Settings
I checked the Opcache. I used php -i | grep opcache. The opcache.memory_consumption was 128MB. I increased it to 256MB. I set opcache.max_accelerated_files to 10000.
WordPress has thousands of files. If the Opcache is full, PHP has to read the disk. I checked the hit rate with a script. The hit rate was 99.9 percent. This means every PHP file is in the RAM. This is the best way to run WordPress.
26. Looking at the Linux Kernel Version
I typed uname -a. The kernel was 5.15. This is a Long Term Support kernel. It is stable. It has the latest security fixes. I checked for pending updates. I typed apt list --upgradable. There were no kernel updates.
I use needrestart to check for services that need a restart after updates. It said all services were running current code.
The server is up to date. I feel good about the security. The kernel is the foundation of the server. It must be strong.
27. Summary of the Investigation
I started with a slow site. I found a high load with low CPU. I traced the problem to the Unix socket backlog. I found a mismatch between PHP and MySQL limits. I found a slow API call in the theme code.
I fixed the kernel limits. I fixed the MySQL configuration. I fixed the PHP-FPM pool settings. I fixed the theme code with curl timeouts.
The server is now fast. The load is low. The users are happy. The technical investigation is complete.
28. Checking the Time Sync
I checked the system time. I typed timedatectl. The NTP service was active. The time was synchronized. This is important for logs. If the time is wrong, it is hard to compare logs from different services.
The database and the web server must have the same time. This server uses systemd-timesyncd. It is a light client. It works well.
29. Monitoring Disk Space and Log Sizes
I checked the disk space again. I typed df -h. The usage was 40 percent. I looked at /var/log. I ran du -sh /var/log/*. The journal was 1GB. I limited it to 500MB.
I edited /etc/systemd/journald.conf. I set SystemMaxUse=500M. I restarted the journal service. I typed systemctl restart systemd-journald. This keeps the logs from eating all the space.
30. Evaluating Process Priorities
I checked the "nice" values of the processes. I typed ps -el. All PHP and MySQL processes were at 0. This is the default. I do not like to change priorities unless I have a good reason.
If I make MySQL -5, it might starve the web server. If I make it +5, the web server might wait too long for data. 0 is a good balance. The kernel scheduler is smart enough to handle it.
31. Reviewing the Firewall Rules
I checked the firewall. I typed ufw status. Only ports 80, 443, and 22 were open. Port 3306 was closed to the outside. This is correct. MySQL should only listen to the local machine.
I checked the SSH config. I typed grep Root /etc/ssh/sshd_config. Root login was disabled. Password login was disabled. This is a secure server.
32. Analyzing the PHP Garbage Collector
I checked the PHP garbage collector. I typed php -r "echo gc_enabled();". The result was 1. This means PHP cleans up circular references in memory.
This helps keep the RAM usage low. I do not need to change this setting. PHP 8.1 has a very good garbage collector. It works without intervention.
33. Checking for Ghost Processes
I searched for "zombie" processes. I typed ps aux | awk '{if ($8 ~ /^Z/) print $0}' — the state field can read "Z+" or "Zs", so a prefix match is safer than an exact comparison. The list was empty. Zombie processes eat slots in the process table; if the table is full, you cannot run new commands.
This server had zero zombies. The PHP-FPM master process was cleaning up its children correctly.
34. Looking at the Network Interface Errors
I checked the network card. I typed ifconfig eth0. I looked at the "errors" and "dropped" lines. Both were 0. This means the physical network is clean.
If there were errors, it could mean a bad cable or a bad switch. These cause packet loss and lag. This was not the case here.
35. Final Verification of PHP-FPM Status
I used the PHP-FPM status page. I enabled it in the pool config. I typed curl localhost/status. I saw the stats.
"Active processes: 2". "Idle processes: 58". "Accepted conn: 1200".
The server has plenty of spare workers. It is ready for a traffic spike. The investigation is finished.
36. Examining the MySQL Inode Cache
I checked the MySQL open file limit. I typed mysql -e "show variables like 'open_files_limit';". The value was 10000. I checked the number of open tables. I typed mysql -e "show status like 'Open_tables';". The value was 400.
The database has plenty of room. It is not hitting any file limits. The kernel's file table is also safe.
37. Reviewing the System Entropy Sources
I checked the entropy sources again. I typed cat /proc/sys/kernel/random/poolsize. The value was 4096. This is the maximum. The server is generating random bits as fast as they are needed.
This is good for the "wp-salts" and other security features of WordPress.
38. Checking the Swappiness of the Kernel
I checked the swappiness again. I typed cat /proc/sys/vm/swappiness. The value was 10, the number I set earlier. It tells the kernel to avoid swapping.
I checked the vfs_cache_pressure. It was 100. This is the default. It tells the kernel to balance the file cache and the process RAM. I left it at 100.
39. Analyzing the PHP Worker Lifecycle
I checked the pm.max_requests setting. I set it to 1000. This tells the PHP worker to die after 1000 requests. This cleans up any tiny memory leaks.
The master process will then start a new worker. This keeps the server fresh. It is a good safety measure.
40. Final Thoughts on Load Management
A load average of 0.4 is excellent for a 4-core machine: the run queue is using about a tenth of one core's worth of capacity, which leaves roughly 90 percent of the machine idle. The site is responsive. The database is stable.
I closed the SSH connection. I wrote a short email to the client. I told them the site is fixed. I told them I will monitor it for 24 hours. I am done.
41. Verifying the Nginx Config Syntax
I typed nginx -t. The output said "syntax is ok". "test is successful". I always run this before I leave a server. A small typo in Nginx can stop the whole site after a reboot.
Everything was correct. I exited the shell.
42. Checking the Apache Benchmark Results
I ran one last test. I used ab -n 100 -c 10 https://site.com/. The results were good.
"Time per request: 50ms". "Failed requests: 0".
At a 50ms mean per request with 10 concurrent clients, that works out to roughly 200 requests per second with no failures — far more headroom than this clinic needs.
43. Looking at the Logrotate Status
I checked the logrotate service. I typed systemctl status logrotate. It was active. I checked the config for Nginx logs. It was set to "daily" and "compress".
The logs will not fill the disk. The old logs will be compressed to save space. This is a professional setup.
44. Reviewing the PHP Include Path
I checked the PHP include path. I typed php -i | grep include_path. It was standard. No strange folders were in the path.
This keeps the file lookups fast. PHP does not have to search many folders to find a file.
45. Checking the Memory Fragmentation with Buddyinfo
I typed cat /proc/buddyinfo. This shows the availability of memory blocks by size. Most blocks were in the higher tiers. This means the RAM is not fragmented.
The kernel can find large blocks of memory for new processes easily. This is a sign of a healthy server.
46. Final Check on MySQL Binlog Purge
I checked the binlog purge again. I typed show binary logs;. There were only 3 files. The expiration setting I made was working.
The disk space is stable. The database is clean.
47. Analyzing the System Uptime Record
I typed last reboot. The server has been up for 200 days. I do not need to reboot it now. The kernel is stable. The services are running well.
I like long uptimes. It means the hardware and the base OS are solid.
48. Checking the Nginx FastCGI Buffer Size
I looked at the FastCGI buffers. I added these to the Nginx config:
fastcgi_buffers 16 16k;
fastcgi_buffer_size 32k;
This helps Nginx handle large responses from PHP. Some WordPress pages are big. If the buffer is too small, Nginx writes to a temp file. This is slow. Larger buffers keep the data in the RAM.
49. Verifying the SSL Certificate Expiration
I checked the SSL cert. I typed openssl x509 -enddate -noout -in /etc/letsencrypt/live/site.com/fullchain.pem. The cert was good for another 60 days.
The auto-renewal script was working. I checked the cron job. It was in /etc/cron.d/certbot.
50. Final Log Inspection
I typed tail /var/log/syslog. I saw no hardware errors. I saw no kernel panics. The server is quiet.
I am confident in the result. I am logging off.
51. Deep Dive into TCP States
I want to look at the TCP connection states more closely. I used the netstat -n | awk '/^tcp/ {++S[$NF]} END {for(a in S) print a, S[a]}' command. This gives a summary of all connections.
"ESTABLISHED 50". "TIME_WAIT 200". "LISTEN 10".
The TIME_WAIT number is a bit high. This happens when the server closes a connection. The kernel waits to make sure no more packets arrive. I already enabled tcp_tw_reuse. This is enough. If the number goes to 5000, I would be worried. 200 is fine.
I checked the tcp_max_tw_buckets. I typed cat /proc/sys/net/ipv4/tcp_max_tw_buckets. The value was 65536. This is the limit for TIME_WAIT sockets. The server is nowhere near the limit.
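For reference, here is the same state tally run over sample netstat lines, so the awk logic is visible (the addresses are illustrative, not captured output):

```shell
# Summarize TCP connection states. $NF is the last field (the state)
# of each "tcp ..." line from `netstat -n`.
printf '%s\n' \
  'tcp 0 0 10.0.0.5:443 203.0.113.7:51000 ESTABLISHED' \
  'tcp 0 0 10.0.0.5:443 203.0.113.8:51001 TIME_WAIT' \
  'tcp 0 0 10.0.0.5:443 203.0.113.9:51002 TIME_WAIT' |
awk '/^tcp/ { ++S[$NF] } END { for (a in S) print a, S[a] }'
```

Note that awk's for-in iteration order is unspecified, so pipe through sort if you want stable output.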
52. Investigating the Process Priority Scheduler
I looked at the /proc/sched_debug file. This file shows how the kernel schedules tasks. It is very detailed. I looked at the runnable_tasks list. Most tasks were short. They did not stay on the CPU for long.
This means the "time slice" for each process is correct. No process is "hogging" the CPU. The PHP workers are getting their turns quickly. This is good for web response times.
I checked the sched_latency_ns. It was 24000000. I checked sched_min_granularity_ns. It was 3000000. These are standard Linux defaults. They work well for 99 percent of servers.
53. Examining the MySQL Query Cache
I checked the query cache. I typed mysql -e "show variables like 'query_cache_type';". The result was OFF. MySQL 8 does not use a query cache. It was removed because it caused lock contention.
This is good. It means I do not have to worry about the query cache lock. The InnoDB buffer pool handles everything now. It is much more efficient for modern multi-core CPUs.
I checked the innodb_log_file_size. It was 512MB. This is a good size. It allows for fast writes during peaks. The logs are rotated correctly.
54. Analyzing the Network Interface Statistics
I used the ethtool -S eth0 command. This shows low-level hardware stats. I looked for "rx_no_buffer_count". It was 0. I looked for "tx_aborted_errors". It was 0.
These numbers mean the network buffer on the card is large enough. The card is not dropping packets because the CPU is busy. This is another sign that the load issue was software-based.
I checked the speed. It was 1000Mb/s. Full duplex. This is standard for modern data centers.
55. Checking the PHP-FPM Slow Log Path Permissions
I checked who can read the slow log. I typed ls -l /var/log/php8.1-fpm.log.slow. The owner was www-data. The permissions were 640. This is good. Only the web user and the root can read the logs.
I checked the directory permissions. /var/log is owned by root. This prevents the web user from deleting the whole log folder if they are hacked.
56. Reviewing the Kernel Load Calculation
I want to talk about how the kernel calculates the load. It uses a "moving average". It looks at the process queue every 5 seconds. It calculates the average over 1, 5, and 15 minutes.
Because it is a moving average, a spike can take a few minutes to disappear from the numbers. This is why the load was still 5.0 even after I fixed the code. I had to wait ten minutes to see the true 0.4 value.
Persistence is key in server work. You must wait for the numbers to settle.
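The decay can be sketched numerically. The 1-minute average behaves approximately like an exponential moving average in which each 5-second sample multiplies the old value by e^(-5/60). Starting from a spike of 5.0 with an empty run queue:

```shell
# Simulate the 1-minute load average decaying after a spike of 5.0
# with nothing left in the run queue. Each 5s tick: load *= e^(-5/60).
awk 'BEGIN {
    load = 5.0
    f = exp(-5 / 60)
    for (t = 5; t <= 600; t += 5) {
        load *= f
        if (t % 120 == 0) printf "%ds: %.2f\n", t, load
    }
}'
```

Two minutes in, the displayed value is still around 0.68; it takes most of ten minutes to read as zero, which matches what I saw.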
57. Inspecting the MySQL Table Cache
I checked the table cache again. I typed mysql -e "show variables like 'table_open_cache';". The value was 4000. I checked the current open tables. It was 500.
The database can keep all the theme's tables open in memory. This avoids "file open" calls for every query. This is good because file opens are slow. Especially on busy servers.
The table_definition_cache was also 2000. This is plenty for WordPress.
58. Checking the System Entropy with Watch
I used watch -n 1 cat /proc/sys/kernel/random/entropy_avail. I reloaded the site ten times. The number stayed above 3000. This confirms that SSL is not draining the entropy.
If the number fell to 100, the page would "spin" during the SSL handshake. This would be another "wait" problem. But this server is fine.
59. Analyzing the PHP Memory Limit per Request
I checked the memory_limit in php.ini. It was 256MB. This is a good size for Apicona – Health Medical WordPress Theme. Some medical themes use a lot of RAM when generating PDF reports.
I checked the peak memory usage of the PHP workers. I used top and looked at the RES column. The max was 80MB. 256MB gives a lot of safety room. No process will be killed by the PHP engine for using too much RAM.
60. Final Verification of Inode Distribution
I used ls -id /var/www/html. I looked at the inode number. It was in the middle of the disk range. I checked a few other folders. The inodes are spread out.
This is a sign of a healthy file system. No one area is overloaded. The ext4 allocator is doing its job well.