I am getting a lot of 499 NGINX error codes. I can see that this is a client-side issue. It is not a problem with NGINX or my uWSGI stack. I note the correlation in the uWSGI logs when I get a 499:
{address space usage: 383692800 bytes/365MB} {rss usage: 167038976 bytes/159MB} [pid: 16614|app: 0|req: 74184/222373] 74.125.191.16 () {36 vars in 481 bytes} [Fri Oct 19 10:07:07 2012] POST /bidder/ => generated 0 bytes in 8 msecs (HTTP/1.1 200) 1 headers in 59 bytes (1 switches on core 1760)
SIGPIPE: writing to a closed pipe/socket/fd (probably the client disconnected) on request /bidder/ (ip 74.125.xxx.xxx) !!!
Fri Oct 19 10:07:07 2012 - write(): Broken pipe [proto/uwsgi.c line 143] during POST /bidder/ (74.125.xxx.xxx)
IOError: write error
I am looking for a more in-depth explanation, and I'm hoping there is nothing wrong with my NGINX config for uwsgi. I am taking this at face value; it seems like a client issue.
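For context, the nginx-to-uWSGI hop for a route like /bidder/ would look something like the sketch below (the socket path and the commented-out directive are illustrative assumptions, not my actual config):

    location /bidder/ {
        include uwsgi_params;
        uwsgi_pass unix:/tmp/uwsgi.sock;   # assumed socket path
        # By default nginx closes the upstream connection as soon as the
        # client disconnects and logs the request as 499; uWSGI then hits
        # SIGPIPE / "Broken pipe" while writing its response.
        #uwsgi_ignore_client_abort on;     # would make nginx wait for uWSGI instead
    }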
I know this is an old thread, but it exactly matches something that happened to me recently, and I thought I'd document it here. The setup (in Docker) is as follows:
nginx_proxy
nginx
php_fpm, running the actual application.
The symptom, at the application's login prompt, was "502 Gateway Timeout". Examination of the logs found the following:
The button initiates an HTTP POST to /login ... so ...
nginx_proxy received the /login request and eventually reported a timeout.
nginx returned a 499 response, which here really means that its own client (nginx_proxy) gave up waiting and closed the connection.
The login request never showed up in the FPM server's logs at all!
There was no traceback or error message in FPM ... nothing, zero, zippo, nada.
It turned out that the problem was a failure to connect to the database used to validate the login. But figuring that out was pure guesswork.
The complete absence of application traceback logs, or even a record that the request had been received by FPM at all, was a complete (and devastating) surprise to me. Yes, the application is supposed to log failures, but in this case it looks like the FPM worker process died with a runtime error, leading to the 499 response from nginx. Now, this obviously is a problem in our application ... somewhere. But I wanted to record the particulars of what happened for the benefit of the next folks who face something like this.
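For illustration, the two nginx layers in a setup like this would look roughly like the sketch below (container names follow the list above; the timeouts, ports, and paths are assumptions, not the actual config):

    # nginx_proxy container: the public entry point
    server {
        listen 80;
        location / {
            proxy_pass http://nginx;      # the inner nginx container
            proxy_read_timeout 60s;       # expires -> the browser sees the gateway timeout
        }
    }

    # inner nginx container: fronts php_fpm
    server {
        listen 80;
        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass php_fpm:9000;    # the FPM worker died, so no reply ever came back
            # when nginx_proxy gives up and closes its connection,
            # this nginx logs the request as 499
        }
    }

The gateway timeout at the edge and the 499 in the middle are two views of the same dead FPM worker.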
This doesn't answer the OP's question, but since I ended up here after furiously searching for an answer, I wanted to share our findings.
In our case, these 499s were expected. For example, when users use the typeahead feature in some of our search boxes, we see entries like this in the logs:
GET /api/search?q=h [Status 499]
GET /api/search?q=he [Status 499]
GET /api/search?q=hel [Status 499]
GET /api/search?q=hell [Status 499]
GET /api/search?q=hello [Status 200]
So in our case, I decided it was safe to use proxy_ignore_client_abort, which was suggested in an earlier answer. Thank you!
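For anyone who wants the concrete form, it's roughly this (the location and upstream name are placeholders for your own setup):

    location /api/search {
        proxy_pass http://app_backend;    # assumed upstream name
        # keep processing the request even if the client aborts,
        # so abandoned typeahead queries are not logged as 499
        proxy_ignore_client_abort on;
    }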