perf: scale workers + per-tablet rate limiting for 20 concurrent users
The default 2-worker gunicorn could only serve 2 concurrent tablet requests, queueing the rest, and the rate limiter saw every tablet as the same Nginx container IP, so 20 users would have collectively burned through the 100 req/min general bucket.

- gunicorn: 5 workers x 4 gthread, --forwarded-allow-ips=*, access log
- uvicorn: 4 workers, --proxy-headers, --forwarded-allow-ips=*
- RateLimitMiddleware: resolve real client IP from X-Forwarded-For -> X-Real-IP -> request.client.host
- Bump rate_limit_general 100 -> 300 req/min/IP (per tablet now)
- Flask: ProxyFix(x_for=1, x_proto=1, x_host=1) so request.remote_addr is the tablet IP, not the Nginx IP
- APIClient: forward X-Forwarded-For + X-Real-IP to FastAPI for both JSON and multipart/files calls; safe no-op outside request context
- 12 new tests (7 server + 5 client) covering header precedence, forwarding behavior, and ProxyFix install

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
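The header-precedence resolution described for RateLimitMiddleware can be sketched as a small pure-Python helper. This is an illustrative sketch only: the function name `resolve_client_ip` and the dict-based interface are assumptions, not code from this repo.

```python
def resolve_client_ip(headers: dict, fallback: str) -> str:
    """Resolve the real client IP with precedence:
    X-Forwarded-For -> X-Real-IP -> fallback (request.client.host)."""
    forwarded = headers.get("X-Forwarded-For")
    if forwarded:
        # X-Forwarded-For is a comma-separated chain; the left-most entry
        # is the originating client, each proxy appends its own hop.
        return forwarded.split(",")[0].strip()
    real_ip = headers.get("X-Real-IP")
    if real_ip:
        return real_ip.strip()
    # No proxy headers present: fall back to the direct TCP peer.
    return fallback
```

With this in place, the rate limiter keys its buckets on the tablet's IP rather than the shared Nginx container IP, which is what makes the 300 req/min limit per-tablet.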
@@ -22,4 +22,12 @@ RUN pybabel compile -d translations
 
 EXPOSE 5000
 
-CMD ["gunicorn", "--workers", "2", "--bind", "0.0.0.0:5000", "app:create_app()"]
+CMD ["gunicorn", \
+     "--workers", "5", \
+     "--threads", "4", \
+     "--worker-class", "gthread", \
+     "--timeout", "60", \
+     "--bind", "0.0.0.0:5000", \
+     "--access-logfile", "-", \
+     "--forwarded-allow-ips", "*", \
+     "app:create_app()"]
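The capacity math behind the new gunicorn settings: with the gthread worker class, each worker handles requests on a thread pool, so concurrency is workers times threads. The numbers below come from the diff; the variable names are illustrative.

```python
# gthread concurrency = workers * threads per worker
workers = 5            # --workers 5
threads_per_worker = 4  # --threads 4 with --worker-class gthread
concurrent_requests = workers * threads_per_worker

# 20 slots, one per concurrent tablet user in the scenario above
assert concurrent_requests == 20
```

The old configuration (2 sync workers, no threads) could hold only 2 in-flight requests, which is why the remaining tablets queued.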