How is the number of threads in an application that uses NIO and selectors lower than in the thread-per-request model?


I am attempting to learn about non-blocking I/O in Java using NIO channels and selectors.

Here is my understanding:

Thread-per-request model:

  • Each time the listening server socket accepts a new connection, a new thread is created with a reference to the client socket returned by accept().
  • The thread created for each connection is blocking in nature, as it can only read a new message once it has finished processing the previous one (a rough sketch of this model follows below).
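For illustration, here is a minimal sketch of that model, assuming a simple line-based echo server on a hypothetical port 8080. Both accept() and readLine() block the thread they run on, so each connection ties up one thread:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class ThreadPerRequestServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(8080)) {
            while (true) {
                Socket client = server.accept();           // blocks until a connection arrives
                new Thread(() -> handle(client)).start();  // one thread per connection
            }
        }
    }

    private static void handle(Socket client) {
        try (BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {  // blocks until the next message arrives
                out.println("echo: " + line);         // "business logic" runs on this same thread
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```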

Non-blocking I/O:

  • The client channel for each connection is registered with the selector, initially with interest in the read operation.
  • Once the event-loop thread has read a message from a channel, it hands the message off to a thread pool that runs the business logic.
  • The event-loop thread is then free to go back to the selector and read new messages from other channels (hence non-blocking).
  • Once the business-logic thread finishes, it wakes up the selector; the event-loop thread then changes the interested operation for that client channel to write.
  • The response is written to the channel and the client eventually receives it.
  • The interested operation for the channel is then changed back to read (a rough sketch of this flow follows below).
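For illustration, here is a rough sketch of that flow with a Selector and a worker pool, assuming a hypothetical process() method as the business logic and port 8080. Partial writes, per-channel queues and error handling are omitted; production code also usually queues interest-op changes and applies them on the event-loop thread instead of changing them from a worker thread, as done here for brevity:

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class EventLoopServer {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ExecutorService workers = Executors.newFixedThreadPool(4);  // business-logic pool

        while (true) {
            selector.select();  // blocks only this one event-loop thread
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();

                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    int n = client.read(buf);  // non-blocking read
                    if (n < 0) {
                        key.cancel();
                        client.close();
                        continue;
                    }
                    buf.flip();
                    byte[] request = new byte[buf.remaining()];
                    buf.get(request);
                    // Hand the message to the pool; this thread goes straight back to select().
                    workers.submit(() -> {
                        byte[] response = process(request);      // business logic off the event loop
                        key.attach(ByteBuffer.wrap(response));   // stash the response on the key
                        key.interestOps(SelectionKey.OP_WRITE);  // ask to be notified when writable
                        selector.wakeup();                       // unblock select() so it sees the change
                    });
                } else if (key.isWritable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer response = (ByteBuffer) key.attachment();
                    client.write(response);
                    if (!response.hasRemaining()) {
                        key.attach(null);
                        key.interestOps(SelectionKey.OP_READ);  // back to waiting for the next message
                    }
                }
            }
        }
    }

    private static byte[] process(byte[] request) {
        return ("echo: " + new String(request)).getBytes();  // placeholder business logic
    }
}
```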

My doubt: in the non-blocking I/O case we are relying on a thread pool to handle the business logic, so aren't we indirectly ending up creating as many threads as there are messages received?

If so, why is it better to use NIO compared to regular blocking I/O? Also, if the number of threads in the thread pool is limited, won't the event-loop thread be blocked when attempting to submit to this pool?

Please point out if any of my assumptions are incorrect, as I am a beginner.

Best answer, by dilettante:

The run-on sentences make it difficult to read, so I may not be answering what you're asking. But with non-blocking I/O, a single thread can be waiting on data availability on multiple connections (via a selector), independent of the number of connections.

Once you've completed a read on a particular connection, you have a choice:

  • You can immediately hand it off to a thread to process, in which case the number of threads needed will be equal to the maximum number of requests that arrive 'simultaneously'. This will probably be rather less than the number of connections.

  • You can queue the request for handling by a thread pool with a limit on the number of threads; i.e., you're willing to inject some delay in the interest of limiting resource use (see the sketch below).
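To make those two choices concrete, here is a small illustrative sketch (the class and pool names are mine, not from the question). A cached pool grows to match the number of concurrent requests, while a fixed pool caps the threads; the standard JDK fixed pool is backed by an unbounded queue, so submitting from the event loop does not block, the extra requests just wait:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class HandOffStrategies {
    public static void main(String[] args) {
        // Choice 1: hand off immediately; threads grow to match 'simultaneous' requests.
        ExecutorService perRequest = Executors.newCachedThreadPool();

        // Choice 2: cap the threads; newFixedThreadPool is backed by an unbounded
        // LinkedBlockingQueue, so submit() from the event loop never blocks;
        // requests beyond the cap simply wait in the queue.
        ExecutorService bounded = Executors.newFixedThreadPool(8);

        Runnable request = () ->
                System.out.println("processing on " + Thread.currentThread().getName());

        perRequest.submit(request);
        bounded.submit(request);

        perRequest.shutdown();
        bounded.shutdown();
    }
}
```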

There's more to it than that, of course, depending on what exactly it takes to process the requests, but you should get the idea.