nginx architecture
Configuring the number of worker processes
# configure according to the actual number of CPU cores in production
worker_processes 8;
How worker connections are allocated among worker processes
nginx uses one eighth of a single worker's total allowed connections as a threshold: when a worker's remaining free connections drop below that one eighth, it skips acquiring the accept_mutex lock, effectively yielding its chance to accept new connections to other workers.
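A minimal sketch of the directives involved (the values are illustrative, not tuned recommendations):
worker_processes 8;              # typically matched to the number of CPU cores

events {
    worker_connections 1024;     # the per-worker connection total referenced above
    accept_mutex on;             # workers take turns accepting new connections
}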
Load balancing
Define an upstream block
Reference it from a location block with proxy_pass
Balancing strategies (a configuration sketch follows the list below)
- Round robin (the default)
- Weighted round robin
- ip_hash
- least_conn
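A minimal sketch putting these pieces together; the upstream name and backend addresses are placeholders:
upstream backend {
    least_conn;                          # or ip_hash; omit for the default round robin
    server 192.168.0.11:8080 weight=3;   # weights give weighted round robin / weighted least_conn
    server 192.168.0.12:8080;
}

server {
    location / {
        proxy_pass http://backend;
    }
}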
Caching static resources
First, configure proxy_cache_path in the http block:
proxy_cache_path /data/nginx/cache keys_zone=cache_zone:10m;
- Defines the cache path, levels, and keys_zone.
- With levels=1:2, a two-level directory hierarchy is used: 1 and 2 mean the directory names are 1 and 2 hexadecimal characters long, respectively. So the first-level directory name is a single hex character such as b, and the second-level name is two hex characters such as 2b, giving 16 first-level directories and 16*16=256 second-level directories.
- All active keys and information about data are stored in a shared memory zone, whose name and size are configured by the keys_zone parameter. A one-megabyte zone can store about 8 thousand keys.
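To illustrate levels, the directive above can be extended as below; cache file names are the MD5 hash of the cache key, and the trailing hex characters of that hash select the directories (the sample path is illustrative):
proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=cache_zone:10m;
# a cached response then lands under a path such as
# /data/nginx/cache/c/29/b7f54b2df7773722d382f4809d65029c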
Then configure the following in the server / location block:
- proxy_cache
- proxy_cache_key
proxy_cache_path /data/nginx/cache keys_zone=cache_zone:10m;

server {
    ...
    location / {
        proxy_pass http://backend;
        proxy_cache cache_zone;
        proxy_cache_key $uri;
    }
}
Limiting client request rate
Use limit_req:
http {
    limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;
    ...
    server {
        ...
        location /search/ {
            limit_req zone=one burst=5;
        }
    }
}
The ngx_http_limit_req_module module (0.7.21) is used to limit the request processing rate per a defined key, in particular, the processing rate of requests coming from a single IP address. The limitation is done using the “leaky bucket” method.
- Note that limit_rate is different: it limits the rate of response transmission to a client.
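For contrast, a small limit_rate sketch (the values are illustrative):
location /download/ {
    limit_rate_after 1m;   # start throttling after the first megabyte has been sent
    limit_rate 500k;       # then cap the response at 500 KB/s per connection
}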
proxy_buffering
When buffering is enabled, nginx receives a response from the proxied server as soon as possible, saving it into the buffers set by the proxy_buffer_size and proxy_buffers directives. If the whole response does not fit into memory, a part of it can be saved to a temporary file on the disk. Writing to temporary files is controlled by the proxy_max_temp_file_size and proxy_temp_file_write_size directives.
When buffering is disabled, the response is passed to a client synchronously, immediately as it is received. nginx will not try to read the whole response from the proxied server. The maximum size of the data that nginx can receive from the server at a time is set by the proxy_buffer_size directive.
Buffering can also be enabled or disabled by passing “yes” or “no” in the “X-Accel-Buffering” response header field. This capability can be disabled using the proxy_ignore_headers directive.
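A hedged sketch of the buffering knobs mentioned above (the sizes are assumptions, not recommendations):
location / {
    proxy_pass http://backend;
    proxy_buffering on;              # on is the default; off streams the response
    proxy_buffer_size 4k;            # buffer used for the response header
    proxy_buffers 8 16k;             # buffers used for the response body
    proxy_max_temp_file_size 1024m;  # overflow spills to a temporary file on disk
}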
proxy_request_buffering
When buffering is enabled, the entire request body is read from the client before sending the request to a proxied server.
When buffering is disabled, the request body is sent to the proxied server immediately as it is received. In this case, the request cannot be passed to the next server if nginx already started sending the request body.
When HTTP/1.1 chunked transfer encoding is used to send the original request body, the request body will be buffered regardless of the directive value unless HTTP/1.1 is enabled for proxying.
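A minimal sketch of disabling request buffering, e.g. for streaming large uploads to the backend (the location name is a placeholder):
location /upload/ {
    proxy_pass http://backend;
    proxy_http_version 1.1;        # allows a chunked request body to be streamed
    proxy_request_buffering off;   # forward the body as it arrives from the client
}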
proxy_http_version
Sets the HTTP protocol version for proxying. By default, version 1.0 is used. Version 1.1 is recommended for use with keepalive connections and NTLM authentication.
keepalive
| Syntax: | keepalive connections; |
|---|---|
| Default: | — |
| Context: | upstream |
Activates the cache for connections to upstream servers.
The connections parameter sets the maximum number of idle keepalive connections to upstream servers that are preserved in the cache of each worker process. When this number is exceeded, the least recently used connections are closed.
For HTTP, the [proxy_http_version] directive should be set to “1.1” and the “Connection” header field should be cleared:
upstream http_backend {
    server 127.0.0.1:8080;

    keepalive 16;
}

server {
    ...

    location /http/ {
        proxy_pass http://http_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        ...
    }
}
For FastCGI servers, it is required to set fastcgi_keep_conn for keepalive connections to work:
upstream fastcgi_backend {
    server 127.0.0.1:9000;

    keepalive 8;
}

server {
    ...

    location /fastcgi/ {
        fastcgi_pass fastcgi_backend;
        fastcgi_keep_conn on;
        ...
    }
}
proxy_next_upstream retries
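proxy_next_upstream specifies in which cases a request should be passed to the next server in the upstream group; a minimal sketch (the chosen conditions and limits are illustrative):
location / {
    proxy_pass http://backend;
    proxy_next_upstream error timeout http_502 http_503;  # failures that trigger a retry
    proxy_next_upstream_tries 2;                          # cap the number of attempts
    proxy_next_upstream_timeout 5s;                       # cap the total time spent retrying
}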
LVS
LVS is short for Linux Virtual Server.
- High performance
- No regular-expression processing, so it cannot separate static and dynamic content
Direct Routing
The load balancer simply changes the MAC address of the data frame to that of the chosen server and retransmits it on the LAN. This is the reason that the load balancer and each server must be directly connected to one another by a single uninterrupted segment of a LAN.
IP Tunneling
The load balancer examines the packet's destination address and port. If they are matched for the virtual service, a real server is chosen from the cluster according to a connection scheduling algorithm, and the connection is added into the hash table which records connections. Then, the load balancer encapsulates the packet within an IP datagram and forwards it to the chosen server. When an incoming packet belongs to this connection and the chosen server can be found in the hash table, the packet will again be encapsulated and forwarded to that server. When the server receives the encapsulated packet, it decapsulates the packet, processes the request, and finally returns the result directly to the user according to its own routing table. After a connection terminates or times out, the connection record will be removed from the hash table.
NAT (Network address translation)
The load balancer examines the packet's destination address and port number. If they are matched for a virtual server service according to the virtual server rule table, a real server is chosen from the cluster by a scheduling algorithm, and the connection is added into the hash table which records established connections. Then, the destination address and the port of the packet are rewritten to those of the chosen server, and the packet is forwarded to the server. When an incoming packet belongs to this connection and the chosen server can be found in the hash table, the packet will be rewritten and forwarded to the chosen server. When the reply packets come back, the load balancer rewrites the source address and port of the packets to those of the virtual service. After the connection terminates or times out, the connection record will be removed from the hash table.