After reading the famous C10k article and searching the web for how things have evolved since it was written, I would like to know whether a standard server today could handle more than 10,000 concurrent connections using a thread per connection (possibly with the help of a thread pool to avoid constantly creating and destroying threads).
Some details that may affect the approach to the problem:
- Amount of input, intermediate processing, and output per connection.
- Length (duration) of each connection.
- Technical specifications of the server (cores, processors, RAM, etc.).
- Whether this system is combined with alternative techniques like AIO, polling, green threads, etc.
Obviously I'm not an expert on the matter, so any remarks or advice will be highly appreciated :)
The usual approaches for servers are either (a) one thread per connection (often with a thread pool) or (b) a single thread with asynchronous IO (typically epoll or kqueue). My thinking is that elements of these approaches can, and often should, be combined: use asynchronous IO (epoll or kqueue) to detect which connections are ready, then hand those connections off to a thread pool for processing. This combines the efficient dispatch of asynchronous IO with the parallelism provided by the thread pool.
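To make the idea concrete, here is a minimal Linux-only sketch of that combination: an epoll loop that accepts connections and detects readable sockets, and a small pool of worker threads that performs the actual reads and writes. The port (9090), pool size, echo-style handler, and the lack of error handling are all illustrative choices on my part, not a description of my actual server; a kqueue backend for FreeBSD/OSX would follow the same structure.

```cpp
// Sketch: epoll readiness dispatch + thread pool for per-connection work (Linux only).
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

int main() {
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    int opt = 1;
    setsockopt(listen_fd, SOL_SOCKET, SO_REUSEADDR, &opt, sizeof(opt));
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(9090);             // arbitrary port for the example
    bind(listen_fd, (sockaddr*)&addr, sizeof(addr));
    listen(listen_fd, SOMAXCONN);

    int epfd = epoll_create1(0);
    epoll_event ev{};
    ev.events = EPOLLIN;
    ev.data.fd = listen_fd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

    // Work queue: the epoll loop pushes readable client fds, workers pop them.
    std::queue<int> ready;
    std::mutex m;
    std::condition_variable cv;

    auto worker = [&] {
        for (;;) {
            int fd;
            {
                std::unique_lock<std::mutex> lk(m);
                cv.wait(lk, [&] { return !ready.empty(); });
                fd = ready.front();
                ready.pop();
            }
            char buf[4096];
            ssize_t n = read(fd, buf, sizeof(buf));
            if (n <= 0) {                     // peer closed or error: drop the connection
                close(fd);
                continue;
            }
            write(fd, buf, n);                // placeholder "processing": echo back
            // Re-arm the fd: EPOLLONESHOT disabled it while this worker handled it.
            epoll_event rev{};
            rev.events = EPOLLIN | EPOLLONESHOT;
            rev.data.fd = fd;
            epoll_ctl(epfd, EPOLL_CTL_MOD, fd, &rev);
        }
    };
    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i) pool.emplace_back(worker);  // pool size is arbitrary

    // Dispatch loop: epoll says which connections are ready; the pool does the work.
    epoll_event events[64];
    for (;;) {
        int n = epoll_wait(epfd, events, 64, -1);
        for (int i = 0; i < n; ++i) {
            int fd = events[i].data.fd;
            if (fd == listen_fd) {
                int client = accept(listen_fd, nullptr, nullptr);
                if (client < 0) continue;
                epoll_event cev{};
                // EPOLLONESHOT so each readiness event goes to exactly one worker.
                cev.events = EPOLLIN | EPOLLONESHOT;
                cev.data.fd = client;
                epoll_ctl(epfd, EPOLL_CTL_ADD, client, &cev);
            } else {
                std::lock_guard<std::mutex> lk(m);
                ready.push(fd);
                cv.notify_one();
            }
        }
    }
}
```

The key detail is EPOLLONESHOT: a ready socket is handed to exactly one worker at a time, and the worker re-arms the descriptor once it has finished processing, so the epoll loop never queues the same connection twice concurrently.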
I have written such a server for fun (in C++) that uses epoll on Linux and kqueue on FreeBSD and OSX, along with a thread pool. I just need to put it through its paces with some heavy testing, do some code cleanup, and then toss it out on GitHub (hopefully soon).