ruby on rails - Issues with nginx limit_req rate limiting - docs clarification? -




I'm having no end of problems getting rate limiting to work on nginx + Passenger/Rails.

Part of my confusion comes from distinguishing which aspects of the config work on a per-client basis and which are global limits.

I'm having trouble getting my head around the ideal setup for nginx's limit_req and limit_req_zone configs. The documentation seems to flip-flop between language that hints it is either user-specific or applies globally.

The docs are quite vague about how the limit_req_zone line works. Is this 'zone' global or per-user? Given the next line, am I right in the following conclusions:

limit_req_zone $binary_remote_addr zone=update_requests:1m rate=20r/s;

- $binary_remote_addr represents a user's IP address, and this representation in particular is preferable because it takes less space than $remote_addr? Why is that important or preferable?
- The 'zone' (in this case) is filled with binary representations of IP addresses...?
- 'rate' is the rate at which requests are allowed to leave the queue?
- Are this 'rate' and 'zone' client-specific, or global?

I'm also unsure about the limit_req line, e.g. this:

limit_req zone=main_site burst=10 nodelay;

- I'm not entirely sure what burst means. The docs are vague here too. I guess it's a number of requests. Why a plain number of requests, when the rest of this request system uses this bizarre 'zone' system?
- 'burst' requests per... what timeframe?
- 'nodelay', as far as I understand, is meant to serve a 503 error if there are other requests in the queue, rather than waiting for the queue to finish. a) Wait how long? b) Does this mean the 'burst' setting is ignored in that case?

Thanks.

Some background info in case anyone is bored and wants to have a look at the config and the general issues we're trying to resolve:

At the moment we have this (extract):

limit_req_zone $binary_remote_addr zone=main_site:10m rate=40r/s;
limit_req_zone $binary_remote_addr zone=update_requests:1m rate=20r/s;

server {
    listen 80;
    server_name [removed];
    root [removed];

    include rtmp_proxy_settings;

    try_files $uri /system/maintenance.html @passenger;
    location @passenger {
        passenger_max_request_queue_size 0; # 256;
        limit_rate_after 2048k;
        limit_rate 512k;
        limit_req zone=main_site burst=10 nodelay;
        limit_conn addr 5;
        passenger_enabled on;
        passenger_min_instances 3;
    }

    location ~ ^/update_request {
        passenger_enabled on;
        limit_req zone=update_requests burst=5 nodelay;
    }

    gzip on;
    gzip_min_length 1000;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain application/xml application/javascript text/javascript text/css;
    gzip_disable "msie6";
    gzip_http_version 1.1;
}

We have two zones defined:

a) "main_site", designed to grab everything, and b) "update_request", which JS on the client polls via AJAX for updated content when the timestamp in a small (cached) file changes

By its nature this tends to mean we have low traffic for one or two minutes, then a massive spike when potentially 10,000 clients hit the server at the same time for updated content (served from the DB in a different way depending on filters, access permissions, etc.).

We were finding that during times of heavy load the site was grinding to a halt as the CPU cores maxed out. We had a few bugs in our updating code which meant that when a connection was dropped, the queries queued up and kept bogging the server down, until we had to take the site down temporarily and force users to log out and refresh their browsers... we DDoS'd ourselves :P We think this was originally caused by connectivity issues on our hosting company's side, making a bunch of requests queue up in users' browsers.

While we ironed out those bugs, we warned clients that they might receive the odd 503 "heavy load" message or see content not updating in a timely fashion. The original intent of the rate limiting was to ensure that the everyday pages of the site could still be navigated during heavy load, while rate limiting the updated content.

However, the main issue we're seeing now that the bugs in the updating code have (hopefully) been ironed out is that we can't quite strike the right balance on the rate limiting. The current setup seems to generate an unhealthy number of 503 errors in the access logs whenever a new piece of content is added to the site (and pulled by all our users at once).

We're looking at various solutions in terms of caching, but ideally we would still like to be protected by some kind of rate limiting that doesn't impact users during day-to-day operations.

Which docs are you reading? http://nginx.org/en/docs/http/ngx_http_limit_req_module.html is pretty clear regarding the usage and syntax of the directives.

Regarding limit_req_zone:

- Yes. In your example, you are allocating 1MB of space to store the list of "current number of excessive requests". The less space each item/key uses, the more entries you can store. "If the zone storage is exhausted, the server will return the 503 (Service Temporarily Unavailable) error to further requests."
- You need to keep track of which clients should be rate-limited, so the zone holds one entry per key (here, per IP address).
- The rate is the maximum number of requests a client can make in the specified period of time.
- The context for limit_req_zone is limited to http, making it global; the key $binary_remote_addr is what makes the limit apply per client.
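To see why the binary representation is preferable: an IPv4 address packed in binary is always 4 bytes, while the dotted-quad string in $remote_addr can be up to 15 bytes, so a zone of the same size can track more clients. A quick Python sketch (illustrative only, not nginx's actual storage code):

```python
import socket

addr = "203.0.113.195"

# $remote_addr-style key: the dotted-quad string, up to 15 bytes for IPv4
text_size = len(addr.encode("ascii"))

# $binary_remote_addr-style key: packed binary form, always 4 bytes for IPv4
binary_size = len(socket.inet_aton(addr))

print(text_size, binary_size)  # -> 13 4
```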

Regarding limit_req:

- Once a client has reached the rate limit, the client can continue making requests; however, the server will delay processing (in an effort to slow the client down).
- If the client continues to make requests above the rate limit and sends at least burst number of requests, the server will drop those requests (instead of slowing them down). One might use this in an effort to fend off DoS attacks or API abuse.
- Burst requests are not time-dependent; burst only kicks in once the client is over the rate limit.
- nodelay removes the delay for processing requests within the burst value. If you want no processing at all for rate-limited clients, set burst to 0 and use nodelay.
- The wait/delay for rate-limited clients depends on the rate specified in limit_req_zone.
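The interaction of rate, burst, and nodelay described above can be sketched as a leaky bucket. This is a simplified model for intuition, not nginx's actual implementation: excess requests accumulate in a bucket that drains at `rate`, `burst` is the extra capacity tolerated on top of the rate, and each request is either accepted, delayed (skipped when nodelay is set), or rejected with a 503:

```python
# Simplified leaky-bucket model of nginx's limit_req behaviour (illustrative,
# not nginx source). rate = requests/second allowed, burst = extra requests
# tolerated above the rate, nodelay = serve burst requests immediately.

def limit_req(timestamps, rate, burst, nodelay=False):
    """Classify each request (given its arrival time in seconds) as
    'accept', 'delay', or 'reject' (a 503 in nginx terms)."""
    excess = 0.0   # how far the client currently is over its allowed rate
    last = None    # arrival time of the previous request
    decisions = []
    for t in timestamps:
        if last is not None:
            # the bucket drains over time at the configured rate
            excess = max(0.0, excess - (t - last) * rate)
        last = t
        if excess > burst:
            decisions.append("reject")  # over rate AND over burst -> 503
        elif excess > 0:
            # over rate but within burst: delayed normally,
            # served immediately when nodelay is set
            decisions.append("accept" if nodelay else "delay")
        else:
            decisions.append("accept")
        excess += 1.0  # this request adds one unit to the bucket
    return decisions

# Six simultaneous requests against rate=1r/s, burst=3:
print(limit_req([0, 0, 0, 0, 0, 0], rate=1, burst=3))
# -> ['accept', 'delay', 'delay', 'delay', 'reject', 'reject']
```

With nodelay the same six requests come back as four accepts and two rejects, which matches the answer above: burst requests are served at once rather than delayed, and anything beyond rate + burst is dropped.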

ruby-on-rails nginx passenger
