Asked by Ben Voigt on Server Fault, February 4, 2021
Note: I am asking about outbound concurrent connection limits, not inbound, which is already covered by existing questions.
Modern browsers typically open a large number of simultaneous connections to take advantage of the fact that TCP shares bandwidth fairly between connections. Of course, this doesn't result in fair sharing between users, so some servers have started penalizing hosts that open too many connections. The limit can be configured client-side (e.g. IE's MaxConnectionsPerServer, Firefox's network.http.max-connections-per-server; see the sketch below), but the method differs for each browser and version, and many users aren't competent to adjust it themselves. So we turn to a Squid transparent HTTP proxy for central management of HTTP downloads.
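For reference, a sketch of those client-side settings. The registry values are the standard per-user IE location; the Firefox pref applies to older releases (current Firefox exposes network.http.max-persistent-connections-per-server instead), and the numbers shown are illustrative, not recommendations.

    Windows Registry Editor Version 5.00

    ; IE: per-user connection caps (MaxConnectionsPer1_0Server covers HTTP/1.0)
    [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings]
    "MaxConnectionsPerServer"=dword:00000006
    "MaxConnectionsPer1_0Server"=dword:00000006

    // Firefox (older builds), via about:config or prefs.js:
    user_pref("network.http.max-connections-per-server", 6);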
How can the number of simultaneous connections from Squid to a remote webserver be limited, so that the webserver doesn't perceive it as abuse of concurrent connections? Ideally the limit would be per source address. Squid should accept virtually unlimited concurrent requests from the client browsers and queue them toward each remote server, issuing only N at a time to any one server and delaying (but not dropping) the rest.
That would require some kind of request queue and inter-process communication, which would make handling requests much slower. I'm not aware of any proxy that would support this.
Please note that most users don't actually change the number of simultaneous connections in their browsers, so in practice the issue is not that serious.
Answered by FINESEC on February 4, 2021
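As an editorial aside, the per-destination queue the answer alludes to can at least be sketched outside of Squid. The following is a minimal, hypothetical illustration (not Squid, not production code): an asyncio forward proxy in Python that accepts any number of client requests but holds at most MAX_PER_HOST concurrent connections to each origin, delaying the rest rather than dropping them. It handles only bodyless plain-HTTP requests in absolute form; keying the semaphore on a (client, host) pair instead of the host alone would give the per-source variant the question asks for. All names and values here are assumptions for illustration.

    import asyncio
    from collections import defaultdict
    from urllib.parse import urlsplit

    MAX_PER_HOST = 4  # hypothetical cap on concurrent connections per origin

    # One semaphore per destination host, created on first use.
    host_limits = defaultdict(lambda: asyncio.Semaphore(MAX_PER_HOST))

    async def handle_client(reader, writer):
        # Read the request head: start-line plus headers, no body (GET-style).
        head = await reader.readuntil(b"\r\n\r\n")
        method, target, version = head.split(b"\r\n", 1)[0].split(b" ", 2)
        url = urlsplit(target.decode())
        host, port = url.hostname, url.port or 80
        path = url.path or "/"
        if url.query:
            path += "?" + url.query
        # The queue: at most MAX_PER_HOST requests proceed to this origin
        # at once; the rest wait here (delayed, not dropped).
        async with host_limits[host]:
            up_r, up_w = await asyncio.open_connection(host, port)
            # Rewrite the absolute-form target to origin-form and force
            # connection close so the relay loop below terminates on EOF.
            headers = head[head.index(b"\r\n") + 2:-2]
            up_w.write(b" ".join([method, path.encode(), version]) + b"\r\n"
                       + headers + b"Connection: close\r\n\r\n")
            await up_w.drain()
            # Relay the response until the origin closes the connection.
            while chunk := await up_r.read(65536):
                writer.write(chunk)
                await writer.drain()
            up_w.close()
        writer.close()

    async def main():
        # 3128 is Squid's conventional port; any free port works.
        server = await asyncio.start_server(handle_client, "0.0.0.0", 3128)
        async with server:
            await server.serve_forever()

    if __name__ == "__main__":
        asyncio.run(main())

In Squid itself, the closest built-in knob is the maxconn ACL, but it counts connections from a client IP, not connections to an origin server, which is exactly the direction the question rules out.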