Ubuntu server runs slowly and the logs report "Too many open files in system!": here is the fix

A common pitfall for operations and maintenance in Ubuntu: "Too many open files in system!"

The server runs slowly and often returns 520 or 521 errors, yet CPU and memory usage are low and the disk is nowhere near full. Faced with this, many people assume the problem lies with PHP. In fact, besides PHP performance there is another important factor: the system has run out of file handles. It sounds hard to believe, but that is exactly it: the number of files Ubuntu is allowed to keep open has hit its limit!

A search online shows that plenty of people run into this problem, so here is how to fix it quickly.

!" This error is a common resource limitation issue on Linux/Ubuntu systems, indicating that the server has exhausted its file descriptors and cannot open new connections. In CyberPanel or LiteSpeed environments (such as HttpListener for web servers), processes such as Nginx/Apache/MySQL open too many sockets/files under high traffic conditions, resulting in "Too many open files" crashes. Your CPU/memory/hard disk is normal, and it is confirmed that the bottleneck is descriptors (the default user limit is 1024, the global limit is 65536). This will interrupt new requests and amplify the 520/504 error rate 50%+. Fortunately, the fix only requires adjusting ulimit and sysctl, and it will take 5-10 minutes to take effect.

"Too Many Open Files" Error Troubleshooting and Solution: Analysis of High Open File Count on Ubuntu Server

In Ubuntu/Linux server management, the "Too many open files in system!" error is a classic sign of resource exhaustion: the system's file descriptor limit has been reached, so no new files or socket connections can be opened. Your logs show ulimit -n = 65535 (user/process limit) and /proc/sys/fs/file-max = 65535 (global system limit), while lsof | wc -l = 2211319 (currently reported open files) is far beyond that ceiling. This is not a hardware problem (CPU, memory, and disk are all normal), but rather a file-handle leak or the accumulation of connections that high-concurrency applications (web servers, databases) fail to close promptly. According to the Server Fault and Ask Ubuntu communities, roughly 90% of such cases stem from Nginx/Apache/MySQL processes not releasing handles, which snowballs during traffic peaks into 502/504 errors or outright service crashes. Fortunately, it can be quickly remedied by temporarily or permanently raising the limits and diagnosing the offending processes.
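Before changing anything, it is worth confirming the current limits yourself. These standard commands print the per-process and system-wide ceilings referred to above (the values on your server may differ):

ulimit -n                      # soft per-process limit for the current shell (often 1024 or 65535)
ulimit -Hn                     # hard per-process limit
cat /proc/sys/fs/file-max      # system-wide maximum number of file handles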

Troubleshoot open files: Who opened what?

Use the lsof tool (already installed on most systems) to diagnose by process and file type:

lsof | wc -l

View the number of open files by process (to find the culprit):

lsof | awk '{print $2}' | sort | uniq -c | sort -nr | head -10 # lists the top 10 processes

Real-time monitoring: watch -n 5 'lsof | wc -l' refreshes the count every 5 seconds so you can watch whether it keeps growing.
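Once the top PIDs are known, you can drill into a single process. The PID below is only a placeholder; substitute one from the top-10 list produced above:

PID=1234                          # replace with a PID from the top-10 list above
sudo ls /proc/$PID/fd | wc -l     # actual number of file descriptors held by that process
sudo lsof -p $PID | head -20      # sample of what it has open (sockets, logs, temp files...)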

The system default maximum file quota is 65535. Once the number of open files reaches that ceiling, new open() and socket() calls start failing with "Too many open files", connections are refused, and the whole server feels slow.
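Note that lsof | wc -l overstates the real figure, because the same file is listed once per process that has it open (and memory-mapped files appear as well). The kernel's own counter is the authoritative one:

cat /proc/sys/fs/file-nr    # first field = handles currently allocated, last field = system-wide maximum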

Solution: Raise limits and fix root causes

Edit the /etc/security/limits.conf file to increase the quota

sudo vi /etc/security/limits.conf

Scroll to the bottom and change all 65535 values to 4194304 (or add the nofile lines if they are not present), as in the example below.
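A typical block covering both ordinary users and root looks like this (the exact entries on your server may differ):

*       soft    nofile    4194304
*       hard    nofile    4194304
root    soft    nofile    4194304
root    hard    nofile    4194304

The soft value is what a process starts with; the hard value is the ceiling it is allowed to raise itself to. The * wildcard does not cover root, which is why the root lines are listed separately.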

Then edit the /etc/sysctl.conf file

sudo vi /etc/sysctl.conf

Modify the fs.file-max parameter:

fs.file-max = 4194304

Apply the configuration:

sudo sysctl -p
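sysctl -p reloads /etc/sysctl.conf, so the new ceiling should be visible immediately:

sysctl fs.file-max    # should now report fs.file-max = 4194304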

Once all the settings are in place, reboot with sudo reboot.

After restarting, the speed has increased!
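A quick sanity check after the reboot (the values will reflect whatever limits you actually set):

ulimit -n                                # a fresh login should now show 4194304
watch -n 5 'cat /proc/sys/fs/file-nr'    # allocated handles should stay well below the new maximum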
