
Tricks and Tips with AIO part 1: The frightening thread pool

OK, it is now time to start our NIO.2 (Asynchronous I/O) expedition with the thread pool. Boo, deadlocks are watching you! If you are new to NIO, I recommend you take a look at my series on NIO.1 (1,2,3,4,5,6) before jumping into NIO.2.


One of the nice things you can do with AIO is configure yourself the thread pool the kernel will use to invoke a completion handler. A completion handler is a handler for consuming the result of an asynchronous I/O operation, like accepting a remote connection or reading/writing some bytes. So an asynchronous channel (with NIO.1 we had SelectableChannel) allows a completion handler to be specified to consume the result of an asynchronous operation. The interface defines three “callbacks”:

  • completed(…): invoked when the I/O operation completes successfully.
  • failed(…): invoked if the I/O operation fails (like when the remote client closes the connection).
  • cancelled(…): invoked when the I/O operation is cancelled by invoking the cancel method.

Below is an example (I will talk about it in much more detail in part II) of how you can open a port and listen for requests:

// Open a port
final AsynchronousServerSocketChannel listener =
    AsynchronousServerSocketChannel.open().bind(new InetSocketAddress(port));
// Accept connections
listener.accept(null, new CompletionHandler&lt;AsynchronousSocketChannel, Void&gt;() {
    public void completed(AsynchronousSocketChannel channel, Void attachment) {...}
    public void failed(Throwable exc, Void attachment) {...}
    public void cancelled(Void attachment) {...}
});

Now every time a connection is made to the port, the completed method will be invoked by one of the kernel’s threads. Do you catch the difference with NIO.1? To achieve the same kind of operation with NIO.1, you would have to listen for requests by doing:

  SelectionKey key = iterator.next();
  if (key.isAcceptable()) {
    // Do something that doesn't block,
    // because if it blocks, no more connections can be
    // accepted as the selector.select(..)
    // cannot be executed
  }
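To make the contrast concrete, here is a minimal, self-contained sketch of that NIO.1 accept path (the class name and the single-accept simplification are mine, for illustration only):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.*;
import java.util.Iterator;

public class Nio1AcceptExample {

    // Accepts a single connection and returns it, to keep the sketch short.
    static SocketChannel acceptOne(Selector selector,
                                   ServerSocketChannel server) throws IOException {
        while (true) {
            selector.select();  // blocks until an event is ready
            Iterator<SelectionKey> iterator = selector.selectedKeys().iterator();
            while (iterator.hasNext()) {
                SelectionKey key = iterator.next();
                iterator.remove();
                if (key.isAcceptable()) {
                    // Must not block here: while this thread is busy,
                    // selector.select() cannot run, so no new connection
                    // can be accepted.
                    return server.accept();
                }
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(0));  // ephemeral port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        // Connect a client to ourselves so accept() has something to do.
        SocketChannel client = SocketChannel.open(
                new InetSocketAddress("127.0.0.1",
                        ((InetSocketAddress) server.getLocalAddress()).getPort()));

        SocketChannel accepted = acceptOne(selector, server);
        System.out.println("accepted=" + (accepted != null));

        client.close();
        accepted.close();
        server.close();
        selector.close();
    }
}
```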

With AIO, the kernel is spawning the thread for us. Where is this thread coming from? This is the topic of this Tricks and Tips with AIO.

By default, applications that do not create their own asynchronous channel group will use the default group, which has an associated thread pool that is created automatically. What? The kernel will create a thread pool and manage it for me? That might be well suited for a simple application, but for complex applications like the Grizzly Framework, relying on an ‘external’ thread pool is unthinkable, as most of the time the application embedding Grizzly will configure its own thread pool and pass it to Grizzly. Another reason is that Grizzly has its own WorkerThread implementation that contains information about transactions (like ByteBuffer, attributes, etc.). At the very least, the monster needs to be able to set the ThreadFactory!

Note that I’m not saying using the kernel’s thread pool is wrong, but for Grizzly, I prefer having full control of the thread pool. So what’s my solution? There are two solutions, et c’est parti (off we go):

Fixed number of Threads (FixedThreadPool)

An asynchronous channel group associated with a fixed thread pool of size N creates N threads that wait for already-processed I/O events. The kernel dispatches events directly to those threads, and each thread first completes the I/O operation (like filling a ByteBuffer during a read operation). Once ready, the thread is re-used to directly invoke the completion handler that consumes the result. When the completion handler terminates normally, the thread returns to the thread pool and waits for the next event. If the completion handler terminates due to an uncaught error or runtime exception, no threads are lost either: the thread is allowed to terminate, and a new thread is created to replace it. The reason the thread is allowed to terminate is so that the thread (or thread group) uncaught exception handler is executed.
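With the plain JDK API, such a fixed group is created as below (a minimal sketch; the pool size of 4 and the use of the default thread factory are arbitrary choices of mine):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.AsynchronousChannelGroup;
import java.nio.channels.AsynchronousServerSocketChannel;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class FixedGroupExample {
    public static void main(String[] args) throws IOException, InterruptedException {
        // N threads; both the I/O completion and the completion
        // handlers run on these same threads.
        AsynchronousChannelGroup group = AsynchronousChannelGroup
                .withFixedThreadPool(4, Executors.defaultThreadFactory());

        // Channels opened against the group use its threads for their handlers.
        AsynchronousServerSocketChannel listener = AsynchronousServerSocketChannel
                .open(group).bind(new InetSocketAddress(0));

        System.out.println("shutdown=" + group.isShutdown());

        listener.close();
        group.shutdownNow();
        group.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```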

So far so good? ….NOT. The first issue you must be aware of when using a fixed thread pool is that if all threads “dead lock” inside a completion handler, your entire application can hang until one thread becomes free to execute again. Hence it is critically important that the completion handler’s methods complete in a timely manner, so as to avoid keeping the invoking thread from dispatching to other completion handlers. If all completion handlers are blocked, any new event will be queued until one thread is ‘delivered’ from the lock. That can cause a really bad situation, can’t it? As an example, using a Future to wait for a read operation to complete can lock your entire application:

        try {
            // byteBuffer: the destination buffer, declared elsewhere
            Future&lt;Integer&gt; result = ((AsynchronousSocketChannel) channel).read(byteBuffer);
            count = result.get(30, TimeUnit.SECONDS);
        } catch (Throwable ex) {
            throw new EOFException(ex.getMessage());
        }
Like for OP_WRITE, I’m pretty sure nobody will ever code something like that, right? Well, some applications need to block until all the bytes have arrived (a Servlet container is a good example), and if you don’t pay attention, your server might hang. Not convinced? Another example could be:

        channel.write(bb, 30, TimeUnit.SECONDS, db_pool,
                new CompletionHandler&lt;Integer, DataBasePool&gt;() {

            public void completed(Integer bytesWritten, DataBasePool attachment) {
                // Wait for a JDBC connection, blocking.
                MyDBConnection db_con = attachment.get();
                ...
            }

            public void failed(Throwable exc, DataBasePool attachment) {...}

            public void cancelled(DataBasePool attachment) {...}
        });
Again, all threads may deadlock waiting for a database connection, and your application might stop working as the kernel has no thread available to dispatch and complete I/O operations.

Grrr, what’s our solution? The first solution consists of carefully avoiding blocking operations inside a completion handler, meaning any thread executing a kernel event must never block on anything. I suspect this will be simple to achieve if you write an application from zero and you want to have a fully asynchronous application. Still, be careful and make sure you properly create enough threads. How do you do that? Here is an example from Grizzly:

ThreadPoolExecutorServicePipeline executor =
    new ThreadPoolExecutorServicePipeline();
AsynchronousChannelGroup asyncChannelGroup = AsynchronousChannelGroup
    .withThreadPool(executor);
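If some blocking work really is unavoidable, a variation on this first solution is to hand it off to an application-owned executor so the kernel’s thread returns immediately. Below is a minimal sketch of the idea; `blockingDbCall()` is a hypothetical stand-in for the blocking `attachment.get()` call of the database example above, and the pool size is arbitrary:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class OffloadExample {
    // Application-owned pool dedicated to blocking work.
    static final ExecutorService appExecutor = Executors.newFixedThreadPool(10);

    // Hypothetical stand-in for the blocking attachment.get() call.
    static String blockingDbCall() {
        return "connection";
    }

    public static void main(String[] args) throws InterruptedException {
        final CountDownLatch done = new CountDownLatch(1);
        final String[] result = new String[1];

        // What completed(...) should do: schedule the blocking call
        // on the application pool and return right away.
        appExecutor.execute(new Runnable() {
            public void run() {
                // The blocking call now runs off the kernel's thread.
                result[0] = blockingDbCall();
                done.countDown();
            }
        });

        done.await(5, TimeUnit.SECONDS);
        System.out.println("result=" + result[0]);
        appExecutor.shutdown();
    }
}
```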

The second solution is to use a cached thread pool.

Cached Thread Pool Configuration

An asynchronous channel group associated with a cached thread pool submits events to the thread pool, which simply invokes the user’s completion handler. Internal kernel I/O operations are handled by one or more internal threads that are not visible to the user application. Yup! That means you have one hidden thread pool (not configurable via the official API, but via a system property) that dispatches events to a cached thread pool, which in turn invokes the completion handler (wait! you just won a prize: a thread context switch for free ;-)). Since this is a cached thread pool, the probability of suffering the hang problem described above is lower. I’m not saying it cannot happen, as you can always create a cached thread pool that cannot grow infinitely (those infinite thread pools should never have existed anyway!). But at least with a cached thread pool you are guaranteed that the kernel will be able to complete its I/O operations (like reading bytes); only the invocation of the completion handler might be delayed when all the threads are blocked. Note that a cached thread pool must support unbounded queueing to work properly. How do you set a cached thread pool? Here is an example from Grizzly:

ThreadPoolExecutorServicePipeline executor =
    new ThreadPoolExecutorServicePipeline();
AsynchronousChannelGroup asyncChannelGroup = AsynchronousChannelGroup
    .withCachedThreadPool(executor, initialThreadCount); // minimum threads kept alive

What about the default that ships with the kernel?

If you do not create your own asynchronous channel group, then the kernel’s default group that has an associated thread pool will be created automatically. This thread pool is a hybrid of the above configurations. It is a cached thread pool that creates threads on demand, and it has N threads that dequeue events and dispatch directly to the application’s completion handler. The value of N defaults to the number of hardware threads but may be configured by a system property. In addition to N threads, there is one additional internal thread that dequeues events and submits tasks to the thread pool to invoke completion handlers. This internal thread ensures that the system doesn’t stall when all of the fixed threads are blocked, or otherwise busy, executing completion handlers.
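If you stick with the default group, its pool can still be tuned from the command line. Here is a sketch of how that looks; the property names are the ones used by the JDK 7 implementation (they may differ in earlier builds), and `com.example.MyThreadFactory` and `MyServer` are placeholder names:

```shell
java -Djava.nio.channels.DefaultThreadPool.initialSize=8 \
     -Djava.nio.channels.DefaultThreadPool.threadFactory=com.example.MyThreadFactory \
     MyServer
```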

What’s next

If you have one thing to learn from this part I, it is that independently of which thread pool you decide to use (default, cached or fixed), make sure you at least limit blocking operations. This is especially true when a fixed thread pool is used. Now, curious about which thread pool performs best in Grizzly? Well, wait for the monster to be unleashed during Devoxx 2008! Finally, the AIO implementation in Grizzly can be browsed online (here, here, here and here) (or better, download the code and build on top of it), and our upcoming release, 1.9.0, will ship with two OSGi bundles (framework and http) to be embedded where the JDK 7 lives 🙂

P.S. Some wording in this blog is from Alan Bateman’s notes.
