The Nginx configuration file is divided into four main parts: main (global settings), server (virtual host settings), upstream (upstream server settings, mainly for reverse proxying and load balancing), and location (settings applied after a URL matches a particular path). Directives in the main part affect all other parts; directives in the server part set the virtual host's domain name, IP, and port; directives in the upstream part define a group of backend servers for reverse proxying and load balancing; the location part matches paths in the URL space (for example, the root directory "/", "/images", and so on). The relationship between them: server inherits from main, and location inherits from server; upstream neither inherits directives nor is inherited from.
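The four-part layout described above can be sketched as a minimal skeleton (the values and the upstream/server names here are placeholders for illustration, not recommendations):

```nginx
# main (global settings)
user www www;
worker_processes 4;

events {
    worker_connections 1024;
}

http {
    # upstream: a named group of backend servers for load balancing
    upstream backend_pool {
        server 192.168.0.1:8080;
        server 192.168.0.2:8080;
    }

    # server: one virtual host
    server {
        listen 80;
        server_name example.com;

        # location: settings for URLs matching this prefix
        location / {
            proxy_pass http://backend_pool;
        }
    }
}
```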
The nginx.conf configuration file
The following is a detailed walkthrough of the nginx.conf configuration file. In many cases not all of these parameters are needed; they serve as a reference for what each directive does. A simpler, general-purpose version is introduced further below.
# the user and group that the Nginx worker processes run as
user www www;
# number of worker processes, usually set equal to the number of CPU cores
worker_processes 4;
# global error log path and level: [ debug | info | notice | warn | error | crit ]
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
# process pid file
#pid logs/nginx.pid;
# specifies the maximum number of file descriptors a process can open:
# working mode and maximum number of connections
# This directive sets the maximum number of file descriptors an nginx worker process may open. The theoretical value is the maximum number of open files (ulimit -n) divided by the number of nginx processes, but nginx does not distribute requests evenly across processes, so it is best to keep this consistent with the value of ulimit -n.
# Because of the uneven distribution, if you set 10240 here, a process may exceed 10240 descriptors once total concurrency reaches 30,000-40,000, and a 502 error will be returned.
worker_rlimit_nofile 65535;
events {
# event model to use: [ kqueue | rtsig | epoll | /dev/poll | select | poll ]. The epoll model
# is a high-performance network I/O model in Linux kernels 2.6 and later; epoll is recommended on Linux. If running on FreeBSD, use the kqueue model.
# additional notes:
# Like Apache, Nginx has different event models for different operating systems.
#A) Standard event model
#Select and poll belong to the standard event model. If there is no more effective method in the current system, nginx will choose select or poll.
#B) Efficient event model
#Kqueue: Used in FreeBSD 4.1+, OpenBSD 2.9+, NetBSD 2.0 and MacOS X. Using kqueue on MacOS X systems with dual processors may cause kernel panic.
#Epoll: used in Linux kernel version 2.6 and later.
#/dev/poll: used on Solaris 7 11/99+, HP/UX 11.22+ (eventport), IRIX 6.5.15+, and Tru64 UNIX 5.1A+.
#Eventport: Used in Solaris 10. To prevent kernel crashes, it is necessary to install security patches.
use epoll;
# maximum number of connections per worker process (total maximum connections = worker_connections × worker_processes)
# Adjust according to the hardware, in conjunction with the worker processes above. Make it as large as practical, but do not drive the CPU to 100%.
worker_connections 1024;
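As a rough capacity check for worker_connections, a commonly cited rule of thumb (an estimate, not an official nginx formula) is: maximum clients = worker_processes × worker_connections, divided by 4 when nginx runs as a reverse proxy, since each client then occupies connections in both directions:

```python
def max_clients(worker_processes: int, worker_connections: int,
                reverse_proxy: bool = False) -> int:
    """Rule-of-thumb client capacity for an nginx instance.

    As a plain web server each client uses one connection; as a reverse
    proxy each client also consumes a connection to the backend, so the
    commonly quoted estimate divides the total by 4.
    """
    total = worker_processes * worker_connections
    return total // 4 if reverse_proxy else total

print(max_clients(4, 1024))                      # 4096 (static serving)
print(max_clients(4, 1024, reverse_proxy=True))  # 1024 (reverse proxy)
```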
#keepalive timeout
keepalive_timeout 60;
# buffer size for reading client request headers. This can be set according to your system page size; a request header normally does not exceed 1k, but since the system page size is generally larger than 1k, it is set to the page size here.
# The page size can be obtained with the command getconf PAGESIZE:
#[root@web001 ~]# getconf PAGESIZE
#4096
# There are cases where client_header_buffer_size needs to exceed 4k, but in any case its value should be an integer multiple of the system page size.
client_header_buffer_size 4k;
# enables the cache for open files; it is off by default. max sets the number of cache entries and is recommended to match the maximum number of open files; inactive is how long a file may go unrequested before its cache entry is removed.
open_file_cache max=65535 inactive=60s;
# how often to check the validity of the cached information.
# Syntax: open_file_cache_valid time; Default: open_file_cache_valid 60; Context: http, server, location. This directive specifies when to check the validity of items in open_file_cache.
open_file_cache_valid 80s;
# the minimum number of times a file must be used within the inactive period of the open_file_cache directive for its descriptor to remain open in the cache. For example, with a value of 1, a file not used even once within the inactive time is removed.
# Syntax: open_file_cache_min_uses number; Default: open_file_cache_min_uses 1; Context: http, server, location. With a larger value, descriptors stay open in the cache only for files that are used often enough.
open_file_cache_min_uses 1;
# Syntax: open_file_cache_errors on | off Default value: open_file_cache_errors off Context: http, server, location This directive specifies whether cache errors should be logged when searching for a file.
open_file_cache_errors on;
}
# sets up an http server and uses its reverse proxy function to provide load balancing support
http{
# file extension and file type mapping table
include mime.types;
# default file type
default_type application/octet-stream;
# default encoding
charset utf-8;
# server name hash table size
#The hash table storing server names is controlled by the directives server_names_hash_max_size and server_names_hash_bucket_size. The hash bucket size is aligned to a multiple of the processor cache line size, which speeds up key lookups by reducing memory accesses: in the worst case a lookup costs two accesses, one to determine the bucket's address and one to find the key inside the bucket. Therefore, if nginx reports that hash max size or hash bucket size needs to be increased, increase server_names_hash_max_size first.
server_names_hash_bucket_size 128;
# buffer size for reading client request headers; see the note on client_header_buffer_size above (a request header rarely exceeds 1k, but the buffer is usually set to at least the system page size, obtainable with getconf PAGESIZE).
client_header_buffer_size 32k;
# buffers for large client request headers. By default nginx reads headers into the client_header_buffer_size buffer; headers that do not fit there are read using large_client_header_buffers.
large_client_header_buffers 4 64k;
# maximum allowed size of a client request body (e.g. files uploaded through nginx)
client_max_body_size 8m;
# enables efficient file transfer mode. The sendfile directive specifies whether nginx uses the sendfile() call (zero copy) to output files. Set it to on for ordinary applications; for disk-I/O-heavy uses such as file downloads, it can be set to off to balance disk and network I/O and reduce system load. Note: if images do not display correctly, change this to off.
sendfile on;
# opens directory list access, suitable for download servers, closed by default.
autoindex on;
# enables or disables the TCP_CORK socket option; it takes effect only when sendfile is used.
tcp_nopush on;
tcp_nodelay on;
# keep-alive connection timeout, in seconds
keepalive_timeout 120;
#FastCGI-related parameters to improve site performance: they reduce resource usage and speed up access. The names below are largely self-explanatory.
fastcgi_connect_timeout 300;
fastcgi_send_timeout 300;
fastcgi_read_timeout 300;
fastcgi_buffer_size 64k;
fastcgi_buffers 4 64k;
fastcgi_busy_buffers_size 128k;
fastcgi_temp_file_write_size 128k;
#gzip module settings
gzip on; # turns on gzip compression output
gzip_min_length 1k; # minimum compressed file size
gzip_buffers 4 16k; # compression buffer
gzip_http_version 1.0; # compression version (default 1.1, if the front-end is squid2.5, please use 1.0)
gzip_comp_level 2; # compression level
gzip_types text/plain application/x-javascript text/css application/xml; # compression types. text/html is compressed by default and need not be listed; listing it does no harm but produces a warning.
gzip_vary on;
# needs to be used when limiting the number of IP connections
#limit_zone crawler $binary_remote_addr 10m;
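In newer nginx versions limit_zone has been replaced by limit_conn_zone used together with limit_conn; the following is a sketch of the modern form (the zone name perip and the limit of 10 connections per IP are illustrative values):

```nginx
http {
    # a 10 MB shared-memory zone, keyed by the client's binary address
    limit_conn_zone $binary_remote_addr zone=perip:10m;

    server {
        location /download/ {
            # allow at most 10 concurrent connections per client IP
            limit_conn perip 10;
        }
    }
}
```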
# load balancing configuration
upstream piao.jd.com {
#upstream load balancing. The weight parameter sets the weight, which can be chosen according to each machine's configuration: the higher the weight, the greater the probability of receiving a request.
server 192.168.80.121:80 weight=3;
server 192.168.80.122:80 weight=2;
server 192.168.80.123:80 weight=3;
#nginx's upstream currently supports the following distribution methods:
#1, round-robin (default)
# Each request is assigned to a different backend server in turn, in chronological order; if a backend server goes down, it is removed automatically.
#2, weight
# Specifies the round-robin probability. The weight is proportional to the share of traffic and is used when backend servers have uneven capacity.
#For example:
#upstream bakend {
# server 192.168.0.14 weight=10;
# server 192.168.0.15 weight=10;
#}
#3, ip_hash
# Each request is assigned according to the hash result of the access IP, so that each visitor accesses a fixed backend server, which can solve the session problem.
#For example:
#upstream bakend {
# ip_hash;
# server 192.168.0.14:88;
# server 192.168.0.15:80;
#}
#4, fair (third party)
# distributes requests based on the response time of the backend server, with the one with the shortest response time given priority.
#upstream backend {
# server server1;
# server server2;
# fair;
#}
#5, url_hash (third party)
# distributes requests according to the hash result of the accessed URL, so that each URL is directed to the same backend server, which is more effective when the backend server is cached.
# Example: add a hash directive in the upstream block; other parameters such as weight cannot be written on the server lines together with it. hash_method selects the hash algorithm to use.
#upstream backend {
# server squid1:3128;
# server squid2:3128;
# hash $request_uri;
# hash_method crc32;
#}
#tips:
#upstream bakend { # defines the IP addresses and status of the load-balanced devices
# ip_hash;
# server 127.0.0.1:9090 down;
# server 127.0.0.1:8080 weight=2;
# server 127.0.0.1:6060;
# server 127.0.0.1:7070 backup;
#}
#Add proxy_pass http://bakend/ to the server that needs to use load balancing;
# The status of each device can be set as:
#1. down: the server temporarily does not participate in the load balancing.
#2. weight: the larger the weight, the larger the share of the load.
#3. max_fails: the number of failed requests allowed, 1 by default. When it is exceeded, the error defined by the proxy_next_upstream directive is returned.
#4. fail_timeout: how long the server is paused after max_fails failures.
#5. backup: requested only when all other non-backup machines are down or busy, so this machine carries the lightest load.
#nginx supports setting up multiple groups of load balancing at the same time for use by different servers.
#client_body_in_file_only is set to On to record the data posted by the client into a file for debugging
#client_body_temp_path sets the directory of the record file. You can set up to 3 levels of directories
#location matches the URL. It can redirect or perform new proxy load balancing.
}
# virtual host configuration
server {
# listening port
listen 80;
# there can be multiple domain names, separated by spaces
server_name www.jd.com jd.com;
# default entry file name
index index.html index.htm index.php;
root /data/www/jd;
# performs load balancing on ******
location ~ .*\.(php|php5)$
{
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
include fastcgi.conf;
}
# picture cache time setting
location ~ .*\.(gif|jpg|jpeg|png|bmp|swf)$
{
expires 10d;
}
#JS and CSS cache time settings
location ~ .*\.(js|css)$
{
expires 1h;
}
# log format setting
#$remote_addr and $http_x_forwarded_for are used to record the client's IP address;
#$remote_user: used to record the client user name;
#$time_local: used to record access time and time zone;
#$request: used to record the requested URL and http protocol;
#$status: used to record the request status (e.g. 200 on success);
#$body_bytes_sent: records the size of the file body sent to the client;
#$http_referer: used to record the page link from which the visit came;
#$http_user_agent: records relevant information of the client browser;
#A web server placed behind a reverse proxy cannot obtain the client's real IP address: $remote_addr then yields the address of the reverse proxy server instead. The reverse proxy can add an x_forwarded_for field to the HTTP headers of the forwarded request to record the original client's IP address and the server address the client originally requested.
log_format access '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" $http_x_forwarded_for';
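A line written in the access format above can be taken apart with a regular expression. The sketch below is a hypothetical example, with an invented sample line, that mirrors the field order of the log_format:

```python
import re

# field order follows the log_format "access" directive above
LOG_PATTERN = re.compile(
    r'(?P<remote_addr>\S+) - (?P<remote_user>\S+) '
    r'\[(?P<time_local>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) (?P<body_bytes_sent>\d+) '
    r'"(?P<http_referer>[^"]*)" "(?P<http_user_agent>[^"]*)" '
    r'(?P<http_x_forwarded_for>.*)'
)

# invented sample line for illustration
line = ('192.168.1.5 - - [10/Oct/2023:13:55:36 +0800] '
        '"GET /index.html HTTP/1.1" 200 612 "-" "Mozilla/5.0" -')

m = LOG_PATTERN.match(line)
print(m.group("remote_addr"), m.group("status"), m.group("body_bytes_sent"))
# prints: 192.168.1.5 200 612
```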
# access log of this virtual host, using the "access" format defined above
access_log /usr/local/nginx/logs/host.access.log access;
# a separate log for 404s would need its own log_format (e.g. a "log404" format) to be defined first
#access_log /usr/local/nginx/logs/host.access.404.log log404;
# Enable reverse proxy for "/connect-controller"
location /connect-controller {
proxy_pass http://127.0.0.1:88; #Please note that the port number here cannot be the same as the port number listened by the virtual host (that is, the port listened by the server)
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
# backend web server can obtain the user's real IP through X-Forwarded-For
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# The following are some reverse proxy configurations, optional.
proxy_set_header Host $host;
# The maximum number of bytes of a single file that a client is allowed to request
client_max_body_size 10m;
# maximum number of bytes of buffer that the proxy keeps for the client request body.
# If set to a relatively large value such as 256k, submitting any image smaller than 256k works in both Firefox and IE. With this directive commented out and the default client_body_buffer_size (twice the operating system page size, i.e. 8k or 16k), problems occur:
# with Firefox 4.0 or IE 8.0, submitting a relatively large image of around 200k returns a 500 Internal Server Error.
client_body_buffer_size 128k;
# lets nginx intercept backend responses with HTTP status codes of 300 and above, so they can be handled with error_page.
proxy_intercept_errors on;
# timeout for establishing a connection with the backend server (proxy connect timeout), i.e. initiating the handshake and waiting for a response
proxy_connect_timeout 90;
# timeout for sending data to the backend server (proxy send timeout); the backend must receive all data within this time
proxy_send_timeout 90;
# timeout for reading the backend server's response after a successful connection (proxy read timeout); in effect, the time the request may spend waiting in the backend's queue for processing (or, put differently, the time the backend takes to handle the request)
proxy_read_timeout 90;
# buffer size used by nginx for the first part of the response read from the proxied server; this part usually contains just a small response header. By default this value equals the size of one buffer from the proxy_buffers directive, but it can be set smaller.
proxy_buffer_size 4k;
# number and size of buffers used for reading the response from the proxied server; for the average web page, keep the total below 32k. The default size of one buffer is a memory page, 4k or 8k depending on the operating system.
proxy_buffers 4 32k;
# buffer size under high load (proxy_buffers*2)
proxy_busy_buffers_size 64k;
# limits the amount of data written to proxy_temp_path at one time, to prevent a worker process from blocking too long while passing files; responses larger than the in-memory buffers are buffered to temporary files and served from there.
proxy_temp_file_write_size 64k;
}
# local dynamic and static separation reverse proxy configuration
# All jsp pages are handled by tomcat or resin
location ~ \.(jsp|jspx|do)$ {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://127.0.0.1:8080;
}
}
}
Nginx general configuration file (this is the nginx.conf used with nginx 1.14.2 on Windows)
The following nginx.conf is a simple example of nginx acting as a front-end reverse proxy server: it serves static files such as js and png itself and forwards dynamic requests such as jsp to other servers (see the explanations above for the details of the parameters):
worker_processes 1;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
# the "main" log format must be defined before access_log below can reference it
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log logs/access.log main;
error_log logs/ssl.error.log crit;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
#gzip on;
server {
listen 8088;
server_name localhost;
charset utf-8;
#access_log logs/host.access.log main;
# entry file settings
location / {
root D:/vueWork/dentalFactory/dist; # entry file directory
index index.html index.htm; # default entry file name
}
# tomcat reverse proxy configuration
location /your-app/ { # "your-app" is a placeholder; replace it with the actual location name
proxy_pass http://192.168.1.10:8080; # e.g. http://127.0.0.1:8080/service name (project name)/
#proxy_set_header Host $host;
# note: if the current project's port is not 8080, the following line is required; otherwise the backend cannot obtain the correct port number
proxy_set_header Host $host:$server_port;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
# proxy configuration of static resources (generally no configuration is required, here is an example of configuring image resources)
#location /images/ {
# root D:/A-studySpace/nginxDemo;
#}
#Here is where you configure the 404 page
#error_page 404 /404.html;
# This is the corresponding page configuration place for the request status
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
# php reverse proxy configuration
# forwards all php page requests to php-fpm for processing
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; #fastcgi_param has many configuration parameters, adjust as needed
# include fastcgi_params;
#}
}
# another virtual host using mix of IP-, name-, and port-based configuration
#
#server {
# listen 8000;
# listen somename:8080;
# server_name somename alias another.alias;
# location / {
# root html;
# index index.html index.htm;
# }
#}
# HTTPS server
#
#server {
# listen 443 ssl;
# server_name localhost;
# ssl_certificate cert.pem;
# ssl_certificate_key cert.key;
# ssl_session_cache shared:SSL:1m;
# ssl_session_timeout 5m;
# ssl_ciphers HIGH:!aNULL:!MD5;
# ssl_prefer_server_ciphers on;
# location / {
# root html;
# index index.html index.htm;
# }
#}
}
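The commented HTTPS server block above references cert.pem and cert.key. For local testing, a self-signed pair with the same file names can be generated with openssl (a sketch; CN=localhost is assumed to match server_name localhost):

```shell
# generate a 2048-bit key and a self-signed certificate valid for 365 days,
# with file names matching the ssl_certificate / ssl_certificate_key directives above
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout cert.key -out cert.pem -subj "/CN=localhost"
```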