Archive for the ‘Uncategorized’ Category

PrimeFaces 4 released: What’s new with PrimeFaces Push

PrimeFaces 4 has been released. The PrimeFaces Push component has been updated to take advantage of the cool new features of Atmosphere 2.0!

To name a few:

  • Atmosphere’s Protocol is now enabled by default. This allows a PrimeFaces application to receive events like client disconnection, proxy disconnection, network outage, etc. It also makes it pretty simple to enable Atmosphere’s UUIDBroadcasterCache to prevent message loss.
  • Heartbeat: Atmosphere’s heartbeat is now turned on by default, preventing proxies/firewalls from unexpectedly closing WebSocket or long-polling connections.
  • Message Size Interceptor: Atmosphere’s message-length tracking is enabled by default. The client-side callback is only invoked once the complete message has been received, which makes the application’s logic much simpler.
  • Significant performance improvements to the MetaBroadcaster.
  • JSR 356 support.
  • All the new features that ship with Atmosphere 2.0!
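Most of these defaults can also be configured explicitly. As a sketch (the init-param name is Atmosphere's standard broadcaster-cache property; treat the servlet declaration itself as an assumption about your deployment), enabling the UUIDBroadcasterCache in web.xml looks like:

```xml
<!-- Hypothetical web.xml fragment: enable the UUIDBroadcasterCache so
     messages broadcast while a client reconnects are not lost. -->
<servlet>
    <servlet-name>AtmosphereServlet</servlet-name>
    <servlet-class>org.atmosphere.cpr.AtmosphereServlet</servlet-class>
    <init-param>
        <param-name>org.atmosphere.cpr.broadcasterCacheClass</param-name>
        <param-value>org.atmosphere.cache.UUIDBroadcasterCache</param-value>
    </init-param>
</servlet>
```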

For the next PrimeFaces release (4.1), we are working on a new component model with an injectable Push instance, which will make it even easier to write powerful WebSocket applications with PrimeFaces.

To install Atmosphere and PrimeFaces, make sure you are using Atmosphere 2.0.2.
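If you manage dependencies with Maven, the corresponding runtime artifact (coordinates as published on Maven Central) can be declared as:

```xml
<dependency>
    <groupId>org.atmosphere</groupId>
    <artifactId>atmosphere-runtime</artifactId>
    <version>2.0.2</version>
</dependency>
```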


Categories: Uncategorized

The Atmosphere Framework 2.0 released

The Async-IO dev team is pleased to announce the release of the Atmosphere Framework 2.0! You can download it here.

Besides all of the performance and cache memory improvements made since Atmosphere 1.0, version 2.0 comes with a host of improvements and new features. The biggest improvement our users will notice is much better memory usage under high load and improved WebSocket/fallback transport performance. These changes alone make it worth upgrading!

Read the announcement here!


wAsync: WebSockets with Fallbacks transports for Android, Node.js and Atmosphere

wAsync aims to make realtime apps possible for Android devices and Java/JVM clients communicating with frameworks like Node.js and Atmosphere, blurring the differences between the various transport mechanisms. Supported transports are WebSockets, Server-Sent Events, long-polling and HTTP streaming. You can download all samples from wAsync’s main page.
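wAsync itself is published on Maven Central under the org.atmosphere group; a sketch of the dependency (the version is deliberately left as a placeholder — pick the latest release):

```xml
<dependency>
    <groupId>org.atmosphere</groupId>
    <artifactId>wasync</artifactId>
    <version><!-- latest release --></version>
</dependency>
```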

wAsync is really simple to use. For this blog, let’s write a simple chat using Node.js. First, let’s define a super simple Node.js WebSockets server with static file support. I’m dropping the entire code here, assuming you know how Node.js works.

Now let’s write a wAsync Android client that can communicate with this server. Graphically, it will look like


Pretty simple Android application

As with the Node.js server, I assume you are familiar with Android development.

Now let’s explain the code. First, we create a new client:

Client client = ClientFactory.getDefault().newClient();

Next, let’s create a Request to send:

RequestBuilder request = client.newRequestBuilder()

This demo uses JSON, and we want to use Jackson for marshalling/unmarshalling Java POJOs, so we define a really simple Encoder:

.encoder(new Encoder<Data, String>() {
           public String encode(Data data) {
              try {
                return mapper.writeValueAsString(data);
              } catch (IOException e) {
                throw new RuntimeException(e);
              }
           }
        })
This code is invoked before the message is sent to Node.js and converts the POJO to JSON. Next we define a decoder that handles Node.js’s JSON response. We only decode messages; we aren’t interested in the other events wAsync propagates:

.decoder(new Decoder<String, Data>() {
           public Data decode(Transport.EVENT_TYPE type,
                              String data) {
             if (type.equals(Transport.EVENT_TYPE.MESSAGE)) {
                try {
                   return mapper.readValue(data, Data.class);
                } catch (IOException e) {
                   // Not a valid Data payload; ignore it.
                }
             }
             return null;
           }
        })

Next, we create our Socket to Node.js:

final org.atmosphere.wasync.Socket socket = client.create();

Then we define two functions, one for handling messages and one for handling unexpected exceptions, and finally open the Socket:

socket.on("message", new Function<Data>() {
           public void on(final Data t) {
               // Update the UI from the main thread.
               runOnUiThread(new Runnable() {
                   public void run() {
                       Date d = new Date(t.getTime());
                       tv.append("Author " + t.getAuthor() + "@ "
                           + d.getHours() + ":" + d.getMinutes() + ": "
                           + t.getMessage() + "\n");
                   }
               });
           }
        }).on(new Function<Throwable>() {
            public void on(Throwable t) {
                tv.setText("ERROR 3: " + t.getMessage());
            }
        }).open(;

Finally, we wire up the send button so that what the user types on the Android screen gets pushed to the server:

  // The and view ids are assumed from the sample's layout.
  bt.setOnClickListener(new OnClickListener() {

           String name = null;

           public void onClick(View v) {
               try {
                 EditText et = (EditText) findViewById(;
                 String str = et.getText().toString();
                 if (name == null) {
                   name = str;
                 }
       Data(name, str));
                 Log.d("Client", "Client sent message");
               } catch (Throwable e) {
                 tv.setText("ERROR " + e.getMessage());
               }
           }
  });

That’s all. If you want to chat between Android and a browser, you can install this HTML file, which uses atmosphere.js, the Atmosphere Framework’s client side, to communicate with Node.js. You could use the JavaScript WebSocket API directly as well, but with atmosphere.js you get transport fallback support for free.

Now let’s write the same client, but this time against the Atmosphere Framework instead of Node.js. wAsync natively supports the Atmosphere protocol with fallback transports like long-polling, so we can tell wAsync to use long-polling in case the server doesn’t support WebSockets. For the server, instead of Node.js, we use the NettoSphere server.

The full code can be browsed here. The two differences are:

AtmosphereClient client = ClientFactory.getDefault().newClient(AtmosphereClient.class);

Here we create a specialized client, which allows us to set some Atmosphere-protocol-specific properties:

RequestBuilder request = client.newRequestBuilder()
        .trackMessageLength(true)  // delimit messages using the Atmosphere protocol

The track-message-length feature makes sure two JSON messages are never delivered to the function as a single message. To support that with Node.js, we would have needed to install Socket.IO.

What’s next for wAsync? Adding support for both the Socket.IO and SockJS protocols. Want to contribute? Fork us! For more information, ping me on Twitter or follow the Atmosphere Framework!

The wAsync development is powered by Async-IO, the company behind the Atmosphere Framework!

Safari’s WebSocket implementation and Java: Problematic!

The current Safari version (~5.1.5 on OS X and iOS) implements an old version of the WebSockets specification. This old version can cause major issues with Java web servers in production. Here are some recommendations for working around Safari. Important note: my observations are based on large deployments using the Atmosphere Framework.

First, let’s take a look at Java web servers supporting WebSockets:

WebServer | Version | Specification | Safari Stability
Tomcat | 7.0.27 and up | hybi-13 and up | NOT SUPPORTED
Jetty | 7.0 to 7.4.5 | up to hybi-12 | UNSTABLE: server suffers high CPU when Safari’s WebSocket connection gets closed
Jetty | 7.5.x to 7.6.2 | up to hybi-12 | UNSTABLE: server suffers high CPU when Safari’s WebSocket connection gets closed
Jetty | 7.5.x to 7.6.2 | up to hybi-13 | UNSTABLE: server suffers high CPU when Safari’s WebSocket connection gets closed
Jetty | 8.x to 8.1.2 | up to hybi-13 | UNSTABLE: server suffers high CPU when Safari’s WebSocket connection gets closed
Jetty | 7.6.3 | all hybi versions | STABLE
Jetty | 8.1.3 | all hybi versions | STABLE
GlassFish | 3.1.1 | all hybi versions | UNSTABLE: suffers many API bugs
GlassFish | 3.1.2 | all hybi versions | STABLE
NettoSphere (based on the Netty Framework) | 1.x | all hybi versions | STABLE

My recommendation: if you need to put a WebSocket application in production, use Jetty 7.6.3 or 8.1.3. GlassFish is also a good server, but much more heavyweight if you are planning to write pure WebSocket applications. NettoSphere is fairly new, and until Atmosphere 1.0.0 is released I’m not recommending it (yet!). Note that the Netty Framework’s WebSocket implementation can be considered STABLE as well, but to run Atmosphere on top of it you need NettoSphere.

Now, if you can’t use any of the stable web servers, you can still use WebSockets. All you need to do is write a Servlet Filter that detects the WebSocket version and forces Safari to downgrade to another “transport”, or communication channel. Server-Sent Events, long-polling, HTTP streaming, polling or JSONP can then be used to reconnect. You just need to implement the reconnect inside your websocket#onClose function. With the Atmosphere jQuery plugin, the reconnect is done transparently, e.g. no special code is needed. The Atmosphere Filter looks like:

    public void init(FilterConfig filterConfig) throws ServletException {
        // The init-param name below is illustrative.
        String draft = filterConfig.getInitParameter("bannedVersion");
        if (draft != null) {
            bannedVersion = draft.split(",");
            logger.debug("Blocked WebSocket Draft version {}", draft);
        }
    }

    public void doFilter(ServletRequest request,
                         ServletResponse response,
                         FilterChain chain)
        throws IOException, ServletException {

        HttpServletRequest r = HttpServletRequest.class.cast(request);
        if (Utils.webSocketEnabled(r)) {
            int draft = r.getIntHeader("Sec-WebSocket-Version");
            if (draft < 0) {
                draft = r.getIntHeader("Sec-WebSocket-Draft");
            }

            if (bannedVersion != null) {
                for (String s : bannedVersion) {
                    if (Integer.parseInt(s) == draft) {
                        HttpServletResponse.class.cast(response)
                            .sendError(501, "Websocket protocol not supported");
                        return;
                    }
                }
            }
        }
        chain.doFilter(request, response);
    }

So if you aren’t using the Atmosphere Framework, make sure you have some sort of Filter that will block Safari from creating problems.

If you are planning to use WebSockets and Java, I strongly recommend you look at Atmosphere instead of using a private native API and getting stuck on one server forever. For more information, ping me on Twitter!

Atmosphere 0.9 released: Tomcat/GlassFish WebSocket, Netty Framework, Hazelcast, Fluid API, jQuery Optimization

The Atmosphere Framework version 0.9 has been released and contains a lot of cool new features and bug fixes (around 101!!). This blog describes the main new features! But first, let’s make it clear: the Atmosphere Framework has been designed to transparently support Comet and WebSocket with both client and server components. That means you don’t have to care whether a server supports WebSocket or Comet, or whether a browser supports WebSocket. You write your application and Atmosphere picks the best transport for you, TRANSPARENTLY! As an example, all applications written with Atmosphere and deployed on Jetty can now be deployed AS-IS on Tomcat and automatically use the new Tomcat WebSocket API. Yes, you read it correctly: WITHOUT any changes!

Performance and Massive Scalability?

Atmosphere has been live for 3 months now and serves millions of requests using WebSocket, long-polling and JSONP => Atmosphere is production ready! And there are many more live web sites using Atmosphere… I just can’t list them all here.


Atmosphere is available for almost all frameworks, either as a plugin or an extension. If your framework doesn’t support Atmosphere, complain to them and tell them it is really simple to add support!

jQuery.atmosphere.js new API

The Atmosphere JavaScript client has been rewritten to allow better integration with the WebSocket API and auto-detection of the best transport to use depending on server capabilities. For example, atmosphere.js always tries WebSocket as the initial transport and transparently negotiates with the server to determine which transport to use. If the server doesn’t support WebSocket, atmosphere.js transparently falls back to HTTP (long-polling, streaming or JSONP). On both the client and server side, your application doesn’t need to implement anything for this to work. API-wise, function support has been added to make it really easy to write JavaScript clients. The new functions available are onOpen, onReconnect, onClose and onError. As an example, a chat client supporting WebSocket and Comet consists only of:

    var socket = $.atmosphere;
    var request = { url: document.location.toString() + 'chat',
                    contentType : "application/json",
                    logLevel : 'debug',
                    transport : 'websocket' ,
                    fallbackTransport: 'long-polling'};

    request.onOpen = function(response) {
        // connection established; response.transport tells you which transport won
    };

    request.onReconnect = function (request, response) {
        // invoked when the connection is re-established
    };

    request.onMessage = function (response) {
        var message = response.responseBody;
        // render the chat message
    };

    request.onError = function(response) {
        // transport failure
    };

    var subSocket = socket.subscribe(request);

A lot of improvements have been made to hide browser-specific behavior: the famous Internet Explorer works transparently (supported versions are 6/7/8/9) and always picks the best transport.

Native Tomcat WebSocket

Starting with Tomcat 7.0.27, Atmosphere will detect the new Tomcat WebSocket API and use it by default. That means atmosphere.js will negotiate the WebSocket protocol and use it. It also means that as soon as you move your application from a previous Tomcat version to 7.0.27, WebSocket will transparently be used. As an example, the chat client described above will transparently communicate over WebSocket with its associated server component:

@Path("/")
public class ResourceChat {

    @Suspend
    @GET
    public String suspend() {
        return "";
    }

    @Broadcast(writeEntity = false)
    @POST
    public Response broadcast(Message message) {
        return new Response(, message.message);
    }
}

Now compare the number of lines the Atmosphere chat (supporting all transports) requires versus the Tomcat chat (which, btw, only supports WebSocket). Much simpler with Atmosphere!

Netty Framework Support

YES, you read that correctly. Atmosphere has been refactored and can now run on top of non-Servlet containers! NettoSphere is the runtime that allows any existing Atmosphere application to run transparently on top of the Netty Framework. NettoSphere also supports WebSocket, and any existing application will work without any change. As simple as:

    // Host, port and resource below are illustrative values.
    Nettosphere server = new Nettosphere.Builder().config(
                 new Config.Builder()
                     .host("")
                     .port(8080)
                     .resource(ResourceChat.class)
                     .build())
             .build();
    server.start();

GlassFish 3.1.2 WebSocket Support

The latest GlassFish release ships with an updated WebSocket implementation and Atmosphere makes use of it. As with Tomcat, Netty and Jetty, Atmosphere applications can be deployed without any modifications.

JAX-RS 2.0 Async API Support

The work-in-progress JAX-RS 2.0 specification introduces a new async API (which strangely looks like Atmosphere’s own API ;-)). The current incarnation of the API is really limited and I really hope the spec EG will reconsider their decision. But whatever decision is made, Atmosphere supports the new annotation and the ExecutionContext class. The chat application would look like:

    // You can use that object to suspend as well.
    @Context ExecutionContext ctx;

    @Suspend() // Not the Atmosphere's Suspend
    @GET
    public String suspend() {
        // ctx.suspend
        return "";
    }

    @Broadcast(writeEntity = false)
    @POST
    public Response broadcast(Message message) {
        return new Response(, message.message);
    }

You can compare with the current JAX-RS sample chat, which only supports long-polling (and is much more complex). The JAX-RS async API’s lack of listeners is a major issue IMO. As an example, Atmosphere’s Suspend annotation supports event listeners, which can be used to track the current state of the connection. With the current incarnation of the JAX-RS async API, an application cannot be notified when a client drops the connection, and hence the application’s resources can never be cleaned up. That can easily produce out-of-memory errors. Another problem is that with the JAX-RS async API it is quite complex to implement HTTP streaming, because some browsers (WebKit and IE) require some padding data before the response. In conclusion: since Atmosphere runs transparently in all web servers, there is no good reason to move to the JAX-RS 2 async API (which will only run on top of Servlet 3.0 web servers). Instead, use the API with Atmosphere and get portability, WebSocket and transport negotiation.

Native WebSocket Application

Atmosphere supports native WebSocket development, i.e. applications that only support WebSocket. A simple pubsub WebSocket handler looks like:

public class WebSocketPubSub extends WebSocketHandler {

    private static final Logger logger = LoggerFactory.getLogger(WebSocketPubSub.class);

    public void onTextMessage(WebSocket webSocket, String message) {
        AtmosphereResource r = webSocket.resource();
        Broadcaster b = lookupBroadcaster(r.getRequest().getPathInfo());

        if (message != null && message.indexOf("message") != -1) {
            // Broadcast the payload of the "message=..." parameter.
            b.broadcast(message.substring("message=".length()));
        }
    }

    public void onOpen(WebSocket webSocket) {
        // Accept the handshake by suspending the response.
        AtmosphereResource r = webSocket.resource();
        Broadcaster b = lookupBroadcaster(r.getRequest().getPathInfo());
        r.addEventListener(new WebSocketEventListenerAdapter());
        b.addAtmosphereResource(r);
        r.suspend(-1);
    }
}

The client consists of:

var socket = $.atmosphere;
var request = { url : document.location.toString() };

request.onMessage = function (response) {
    if (response.status == 200) {
        var data = response.responseBody;
        if (data.length > 0) {
            // print the message
        }
    }
};

subSocket = socket.subscribe(request);

You can also use the WebSocket API directly, but atmosphere.js does a lot more (reconnection, proper API use, etc.).

Hazelcast support

Atmosphere can now be clustered/clouded using Hazelcast. As simple as:

@Path("/pubsub/{topic}")
public class JQueryPubSub {

    private @PathParam("topic") HazelcastBroadcaster topic;

    @GET
    public SuspendResponse<String> subscribe() {
        return new SuspendResponse.SuspendResponseBuilder<String>()
                .broadcaster(topic)
                .addListener(new EventsLogger())
                .build();
    }

    @POST
    public Broadcastable publish(@FormParam("message") String message) {
        return new Broadcastable(message, "", topic);
    }
}

The broadcast operations will be distributed to all available Hazelcast servers. You can massively scale your WebSocket application just by using the Hazelcast plugin! Don’t like Hazelcast? You can do the same using the Redis, JMS, XMPP or JGroups broadcasters as well.
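Swapping in a clustered broadcaster is typically just configuration. As a sketch (the init-param name is Atmosphere's standard broadcaster property; the Hazelcast class name below is an assumption based on the plugin's naming convention, so verify it against the plugin you install):

```xml
<init-param>
    <param-name>org.atmosphere.cpr.broadcasterClass</param-name>
    <param-value>org.atmosphere.plugin.hazelcast.HazelcastBroadcaster</param-value>
</init-param>
```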


Tons of new features and improvements (101 issues fixed).

1.0.0: Coming SOON, SwaggerSocket around the corner!

Atmosphere is fully powered by Wordnik and we are working hard to make the 1.0.0 release happen soon (end of May 2012). At Wordnik we are using Atmosphere a lot, and soon we will release our REST-over-WebSocket protocol (called SwaggerSocket, which integrates with Swagger). New features will include an Atmosphere Java client that mimics the JavaScript functionality, Terracotta support, etc. (see a detailed list here).

For any questions, or to download the Atmosphere Client and Server Framework, go to our main site, use our Google Group forum, follow the team or myself, and tweet your questions there!

Async Http Client 1.2.0 released

Read the official announcement including the changes log

Thanks to everybody who contributed to this amazing release.

For any questions you can use our Google Group, the #asynchttpclient IRC channel, or Twitter to reach me! You can check out the code on GitHub, download the jars from Maven Central or use Maven:



Leaving Ning

Today I officially resigned from Ning. Ning was a wonderful place to work, but I wanted to spend more time with my three little monsters and avoid traveling to California so often. I’ve never before worked with a team of such skilled architects, and I will miss all the learning I was doing every day. Thanks to all of you, and I’m sure Ning will be a great success!! I hope we all keep in touch!

What am I gonna do? My quest for my next company starts this week … things may move quite fast, as you all know :-) For sure I will take a couple of weeks off: I left Sun on a Friday, and the next Monday I was at Ning. Bad idea, but I was so thrilled to join a great place like Ning!

I will not disappear completely, as I can’t stop improving my Atmosphere Framework and supporting our growwwwwing community, so I will not be 100% on vacation. I also want to explore Akka, as that project is so interesting and the community there is just awesome, and I can be dangerous since I have commit access :-)! I will also continue to actively work on the Async Http Client… actually, I will spend more time on it now!

So, Just follow me on Twitter for summer news :-).


The new Atlassian JIRA Studio Activity Bar is powered by the Atmosphere Framework

Last week Atlassian released their new JIRA Studio, a hosted software development suite that supports every role of a high-performing development team through every stage of the development process. One of the new features is called the Activity Bar, and it is powered by Atmosphere!

Atmosphere is hiding inside the “Recent Issues” window above :-). You can read the official announcement here, and why Atmosphere was chosen instead of Servlet 3.0 or a private implementation like Jetty’s or Grizzly’s.

Technically, the Activity Bar uses the atmosphere-jersey module, as it makes it quite simple to write asynchronous applications: only five annotations to learn: @Suspend, @Resume, @Broadcast, @Schedule and @Cluster. As the name implies, this module is built around the powerful Project Jersey and atmosphere-runtime, our portability layer on top of all the private/non-portable asynchronous Java APIs. The Activity Bar also uses the atmosphere-guice module (an improved version), which brings the power of Google Guice to Atmosphere. Finally, the Activity Bar uses an API called AtmosphereResourceEventListener, which is extremely useful when writing asynchronous applications. Why? Because when you suspend connections, you want to make sure all the resources associated with them get cleaned up if the remote browser closes the connection. Without such an API, your application can easily produce OOMs, as some resources may stay associated with dead connections forever. Always think about that before choosing a Comet framework!

Wow! It has been a pleasure to work with Richard Wallace and his team. Since we are using JIRA @ Ning, I should soon get to work with my own framework!

For any questions or to download Atmosphere, go to our main site and use our Nabble forum (no subscription needed) or follow the team or myself and tweet your questions there! Or download our latest presentation to get an overview of what the framework is.



Atmosphere 0.5 is released

Atmosphere 0.5 is released. This release includes many new features like Guice support, asynchronous request processing, JMS support, jQuery support, event listeners, etc.


The community around Atmosphere is taking shape and we have received a lot of good feedback, which is reflected in the new features added:

Guice Support: it is now quite simple to integrate with Google Guice using the new AtmosphereGuiceServlet.

As simple as:

    protected void configureServlets() {
        Map<String, String> params = new HashMap<String, String>();
        params.put(PackagesResourceConfig.PROPERTY_PACKAGES, "org.atmosphere.samples.guice");
        serve("/chat*").with(AtmosphereGuiceServlet.class, params);
    }

New Broadcaster: Easy Asynchronous HTTP Request Processing

The new set of Broadcasters allows you to do fully asynchronous HTTP processing, similar to what RESTEasy offers:

public class SimpleResource {
   public Broadcastable getBasic(final @PathParam("topic") JerseySimpleBroadcaster broadcaster) throws Exception {
      Thread t = new Thread() {
         public void run() {
            try {
               Response jaxrs = Response.ok("basic").type(MediaType.TEXT_PLAIN).build();
               broadcaster.broadcast(jaxrs);
            } catch (Exception e) { /* log and ignore */ }
         }
      };
      t.start();
      return new Broadcastable(broadcaster);
   }
}

New AtmosphereResourceEventListener

You can now listen to events like resume, client disconnections and broadcast.

    public Broadcastable subscribe() {
        return broadcast("OK");
    }


public class EventsLogger implements AtmosphereResourceEventListener {

    private static final Logger logger = LoggerFactory.getLogger(EventsLogger.class);

    public EventsLogger() {
    }

    public void onResume(AtmosphereResourceEvent event) {"onResume: {}", event);
    }

    public void onDisconnect(AtmosphereResourceEvent event) {"onDisconnect: {}", event);
    }

    public void onBroadcast(AtmosphereResourceEvent event) {"onBroadcast: {}", event);
    }
}

The Meteor now supports scheduled and delayed events broadcasting

If you want to use Atmosphere without rewriting your Servlet-based application (or don’t want to use the Servlet 3.0 async API), you can use a Meteor instead and do all the same operations as when you use the annotations:

    public void doGet(HttpServletRequest req, HttpServletResponse res) throws IOException {
        Meteor m =, list, null);

        // Log all events on the console.
        m.addListener(new EventsLogger());

        // Periodic events broadcast
        m.schedule("Atmosphere is cool", 10, TimeUnit.SECONDS);

        // Suspend the connection indefinitely.
        m.suspend(-1);
    }

Built-in framework support for different browser behaviors and proxies

The framework now adds the proper headers required to make it work with any proxy. It also makes sure all browsers work out of the box by writing some default comments when a response gets suspended, so browsers like Chrome and WebKit work properly. The framework also takes care of errant connections when hot deployment happens.

Improved algorithm for WebServer detection like App Engine, Tomcat, Jetty and GlassFish

The framework auto-detects which web server it is running on in a more efficient way. The behaviour is also configurable by adding, in web.xml:

  <param-value>AN EXISTING CometSupport implementation, or a customized one</param-value>  

JMS support

You can now cluster your application using JMS, Shoal or JGroups:

    public Broadcastable publish(@FormParam("message") String message) {
        return broadcast(message);
    }

Annotation improvements

All annotations have been reviewed and some cool functionality added, like resumeOnBroadcast, event listeners, etc. See the API for more information.

JQuery Support

See how simple it can be here.

Better control of BroadcastFilter state, allowing easier aggregation/filtering/pruning of events

If your application broadcasts events extensively, it is important to pick the right strategy to make sure all events reach your clients as fast as possible, even under high load. BroadcastAction has been added to make writing BroadcastFilters for aggregation/filtering/pruning of events really simple. Simple as:

    public BroadcastAction filter(Object message) {
        if (message instanceof String) {
            if (bufferedMessage.length() < maxBufferedString) {
                // Accumulate until the buffer is full.
                bufferedMessage.append(message);
                return new BroadcastAction(ACTION.ABORT, message);
            } else {
                message = bufferedMessage.toString();
                bufferedMessage = new StringBuilder();
                return new BroadcastAction(ACTION.CONTINUE, message);
            }
        } else {
            return new BroadcastAction(message);
        }
    }

Powerful PubSub sample

Take a look at our PubSub sample, which demonstrates all the power of Atmosphere!!! We also improved our white paper (PDF).


As usual, thanks to Matthias (ADF Faces), Paul (Jersey), Viktor (Akka), Hubert (Grizzly) and Cagatay (PrimeFaces) and all the users for their contributions to Atmosphere! Special thanks to Ning for allowing me to work on this project (and thanks to Sun before Ning!).

For any questions or to download Atmosphere, go to our main site and use our Nabble forum (no subscription needed) or follow us on Twitter and tweet your questions there!


Putting GlassFish v3 in Production: Essential Surviving Guide

November 27, 2009

On December 10, GlassFish v3 GA will be released to the world. As you are aware, the marketing vehicles for this release are Java EE 6 and the fact that GlassFish is now a full OSGi runtime/container!!! Both are innovative technologies, but they will not save your life once you put GlassFish in production, hence this survival guide :-). In the end, once your OSGi/EE 6 application is ready, you still want the same great performance you got with GlassFish v2. This blog gives some hints about how to configure and prepare GlassFish v3 for production use.


New Architecture

With v3, the Grizzly Web Framework’s role has significantly increased compared with v2. In v2, its role was to serve HTTP requests in front of the Tomcat-based web container. In v3, Grizzly is used as an extensible micro-kernel which handles almost all real-time operations, including dispatching HTTP requests to Grizzly’s web-based extensions (static resource container, Servlet container, JRuby container, Python container, Grails container), administrative extensions (Java WebStart support, admin CLI), the WebService extension (EJB) and the monitoring/management REST extensions.



As you can see, it is critical to properly configure GlassFish in order to get the expected performance for your application and GlassFish in general.

Debugging GlassFish

Before jumping into the details, I recommend you always run GlassFish with the following property, which displays the Grizzly internal configuration for both the NIO and web framework layers in the server log:

-Dcom.sun.grizzly.displayConfiguration=true

If you need to see what Grizzly is doing under the hood, like the request headers received, the responses written, etc., you may want to turn on snooping so you don’t need to use Wireshark or ngrep:

-Dcom.sun.grizzly.enableSnoop=true

Note that if you enable this mechanism, performance will drop significantly, so use it only for debugging purposes.

Configuring the VM

Make sure you remove the following jvm-options in domain.xml:

-Xmx512m -client

and replace it with

-server -XX:+AggressiveHeap -Xmx3500m -Xms3500m -Xss128k

For anything other than Solaris/SPARC, 3500m should be 1400m. On a multi-CPU machine, add:

-XX:ParallelGCThreads=N -XX:+UseParallelOldGC

where N is the number of CPUs if < 8 (so really, you can leave it out altogether in that case) and N = number of CPUs / 2 otherwise. On a Niagara, add:
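The sizing rule above can be sketched in a few lines of Java (the class and helper names are mine, and the rule is just the heuristic quoted above, not an official formula):

```java
public class GcThreads {
    // Heuristic from the text: use all CPUs below 8, half of them otherwise.
    static int parallelGcThreads(int cpus) {
        return cpus < 8 ? cpus : cpus / 2;
    }

    public static void main(String[] args) {
        int cpus = Runtime.getRuntime().availableProcessors();
        System.out.println("-XX:ParallelGCThreads=" + parallelGcThreads(cpus));
    }
}
```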


You can also install the 64-bit JVM and use


with JDK 6u16 and later. A 64-bit JVM with


will allow you to specify larger Java heaps, especially useful on Windows x64, where you are limited to about


of max Java heap. Note that a 64-bit JVM means you'll need to be running a 64-bit operating system. That's not an issue with Solaris, but many people who run Linux only run the 32-bit version, and Windows users will need a 64-bit Windows in order to use a 64-bit Windows JVM. A 64-bit JVM with -XX:+UseCompressedOops will give you larger Java heaps with 32-bit performance. 64-bit JVMs also make additional CPU registers available on Intel/AMD platforms.

Configuring the Thread Pool

Make sure you take a look at "what changed" since v2 and how to properly configure Grizzly in v3. The settings you should care about are acceptor-threads


and the number of worker threads


The recommended value for acceptor-threads is the number of cores/processors available on the machine you deploy on. I recommend you always run a sanity performance test using the default value (1) and with the number of cores, just to make sure. Next is deciding the number of threads required per HTTP port. With GlassFish v2, the thread pool configuration was shared amongst all HTTP ports, which was problematic, as some ports/listeners didn't need as many threads as port 8080. We fixed that in v3, so you can configure the thread pool per listener. The ideal value for GlassFish v3 should always be between 20 and 500 maximum, as Grizzly uses a non-blocking I/O strategy under the hood and you don't need as many threads as with a blocking I/O server like Tomcat. I can't recommend a specific number here; it always depends on what your application is doing. For example, if you do a lot of database queries, you may want a higher number of threads in case the connection pool/JDBC locks on the database and "wastes" threads until it unlocks. With GlassFish v2, we saw a lot of applications hang because all the worker threads were locked by the connection pool/JDBC. The good thing in v3 is that those "wasted" threads will eventually time out, something that wasn't available in v2. The default value is 5 minutes, and this is configurable.
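As a sketch of what a per-listener pool looks like in domain.xml (the element and attribute names follow GlassFish v3's Grizzly configuration, but treat the exact names and values as assumptions to verify against your install):

```xml
<thread-pool name="http-thread-pool"
             min-thread-pool-size="20"
             max-thread-pool-size="200"
             idle-thread-timeout-in-seconds="300"/>
```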


I/O strategy and buffer configuration

In terms of the buffers Grizzly uses for read and write I/O operations, the default (8192 bytes) should be the right value, but you can always play with the number
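A sketch of what that looks like in domain.xml (I'm assuming the buffer-size-bytes attribute name here; double-check it against your v3 schema):

```xml
<transport name="tcp" buffer-size-bytes="8192"/>
```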


If your application does a lot of I/O write operations, you can also tell Grizzly to use an asynchronous write strategy
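From memory this was a Grizzly system property; the exact name below is an assumption from the Grizzly 1.9 era, so verify it against the Grizzly version bundled with your GlassFish:

```xml
<jvm-options>-Dcom.sun.grizzly.http.asyncwrite.enabled=true</jvm-options>
```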


When this strategy is used, all I/O writes are executed using a dedicated thread, freeing the worker thread that executed the operation. Again, it can make a big difference. An alternative worth considering, if you notice that some write operations seem to take more time than expected, is to increase the pool of "write processors" by increasing the number of NIO Selectors:
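A sketch, assuming the com.sun.grizzly.maxSelectors property name from Grizzly 1.9 (the value is illustrative):

```xml
<jvm-options>-Dcom.sun.grizzly.maxSelectors=30</jvm-options>
```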


Make sure this number is never smaller than the number of worker threads, as that will give disastrous performance results. You should increase this number if your application uses the new Servlet 3.0 Async API, the Grizzly Comet Framework, or Atmosphere (recommended). When asynchronous APIs are used, GlassFish needs more "write processors" than without

Let Grizzly magically configure itself

Now, Grizzly supports two "unsupported" properties in GlassFish that can be used to let GlassFish auto-configure itself. These properties may or may not make a difference, but you can always try your configuration with and without them. The first one configures the buffers, acceptor threads and worker threads for you:
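If memory serves, the property was named along these lines (an assumption; verify before relying on it):

```xml
<jvm-options>-Dcom.sun.grizzly.autoConfigure=true</jvm-options>
```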


The second one tells Grizzly to change its threading strategy to leader/follower:
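Again from memory, and hedged accordingly:

```xml
<jvm-options>-Dcom.sun.grizzly.useLeaderFollower=true</jvm-options>
```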


It may or may not make a difference, but it's worth the try. You can also force Grizzly to complete all its I/O operations using a dedicated thread:
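The property name below is my best recollection from the Grizzly 1.9 code base, so treat it as an assumption:

```xml
<jvm-options>-Dcom.sun.grizzly.finishIOUsingCurrentThread=false</jvm-options>
```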


It may make a difference if your application does a small amount of I/O operations under load.

Cache your static resources!

By default, the Grizzly HTTP File Cache is turned off. To get decent static resource performance, I strongly recommend you turn it on (it makes a gigantic difference)
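In domain.xml the file cache hangs off the protocol's http element; a sketch (the max-age value is illustrative):

```xml
<!-- domain.xml: inside <protocol><http> ... </http></protocol> -->
<file-cache enabled="true" max-age-seconds="3600"/>
```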


Only for JDK 7

Now, if you are planning to use JDK 7, I recommend you switch Grizzly's ByteBuffer strategy and allocate memory outside the VM heap by using direct byte buffers
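A sketch, assuming the byte-buffer-type attribute from the v3 transport schema:

```xml
<transport name="tcp" byte-buffer-type="DIRECT"/>
```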


This applies only on JDK 7: with JDK 6, allocating heap byte buffers gives better performance than native ones. Now, if you realize GlassFish is allocating too much native memory, just add
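the standard HotSpot flag that caps direct-buffer allocation (the 256m value is just an example):

```xml
<jvm-options>-XX:MaxDirectMemorySize=256m</jvm-options>
```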


That should reduce the native memory usage.

WAP and Slow Network

If your application will be used by phones over the WAP protocol, or from slow networks, you may want to extend the default timeout Grizzly uses when executing I/O operations:

-Dcom.sun.grizzly.readTimeout

for read, and

-Dcom.sun.grizzly.writeTimeout

for write. The default for both is 30 seconds. That means Grizzly will wait up to 30 seconds for incoming bytes to come from the client when processing a request, and up to 30 seconds when writing bytes back to the client, before closing the connection. On a slow network, 30 seconds for executing the read operation may not be enough, and some clients may not be able to execute requests. But be extra careful when changing the default value: if the value is too high, a worker thread will be blocked waiting for bytes, and you may end up running out of worker threads. Note that a malicious client may produce a denial of service by sending bytes very slowly; it may take as long as 5 minutes (see the thread timeout config above) before Grizzly reclaims the worker thread. If you experience write timeouts, e.g. the remote client is not reading the bytes the server is sending, you may also increase the value, but I would instead recommend you enable the async I/O strategy described above to avoid locking worker threads.
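For example, to double both timeouts, assuming the values are in milliseconds as in Grizzly 1.9 (verify the unit for your version):

```xml
<jvm-options>-Dcom.sun.grizzly.readTimeout=60000</jvm-options>
<jvm-options>-Dcom.sun.grizzly.writeTimeout=60000</jvm-options>
```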

Configuring the keep alive mechanism a-la-Tomcat

The strategy Grizzly uses for keeping a remote connection open is to pool the file descriptor associated with the connection; when an I/O operation occurs, it gets the file descriptor from the pool and spawns a thread to execute the I/O operation. As you can see, Grizzly isn't blocking a thread waiting for the next I/O operation (the next client request). Tomcat's strategy is different: when Tomcat processes requests, a dedicated thread blocks between client requests for a maximum of 30 seconds. This gives really good throughput performance, but it doesn't scale very well, as you need one thread per connection. But if your OS has tons of threads available, you can always configure Grizzly to use a similar strategy:


Tomcat also has an algorithm that reduces the time a thread can block waiting for a new I/O operation, so under load threads don't stay blocked too long, giving other requests the chance to execute. You can enable a similar algorithm with Grizzly:


Depending on what your application is doing, you may get a nice performance improvement by enabling those properties.

Ask questions!

As I described here, I will no longer be working on GlassFish by the time you read this blog, so make sure you ask your questions on the GlassFish mailing list, or you can always follow me on Twitter and ask there!


Categories: Uncategorized
