Archive

Archive for November, 2009

Putting GlassFish v3 in Production: Essential Survival Guide

November 27, 2009 6 comments

On December 10, GlassFish v3 GA will be released to the world. As you are aware, the marketing vehicles for this release will be Java EE 6 and the fact that GlassFish is now a full OSGi runtime/container!!! Both are innovative technologies, but they will not save your life once you put GlassFish in production, hence this survival guide :-). In the end, once your OSGi/Java EE 6 application is ready, you still want the same great performance you got with GlassFish v2. This post gives some hints on how to configure and prepare GlassFish v3 for production use.

[image: v3runtime.png]

New Architecture

With v3, the Grizzly Web Framework's role has significantly increased compared with v2. In v2, its role was to serve HTTP requests in front of the Tomcat-based WebContainer. In v3, Grizzly is used as an extensible micro-kernel which handles almost all real-time operations, including dispatching HTTP requests to Grizzly's Web-based extensions (Static Resource Container, Servlet Container, JRuby Container, Python Container, Grails Container), administrative extensions (Java WebStart support, Admin CLI), the WebService extension (EJB) and the Monitoring/Management REST extensions.

[image: v3-diagram.png]

At runtime, Grizzly will do the following:

[image: v3runtime.png]

If you are familiar with Grizzly's internals:

[image: v3runtime.png]

As you can see, it is critical to properly configure GlassFish in order to get the expected performance for your application and for GlassFish in general.

Debugging GlassFish

Before jumping into the details, I recommend you always run GlassFish with the following property, which displays the Grizzly internal configuration for both the NIO and Web frameworks in the server log:

-Dcom.sun.grizzly.displayConfiguration=true or
network-config>network-listeners>network-listener>transport#display-configuration

If you need to see what Grizzly is doing under the hood, such as the request headers received, the responses written, etc., you may want to turn on snoop mode so you don't need to use Wireshark or ngrep:

-Dcom.sun.grizzly.enableSnoop=true or 
network-config>network-listeners>network-listener>transport#enable-snoop 

Note that if you enable this mechanism, performance will drop significantly, so use it only for debugging purposes.
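
If you prefer not to edit domain.xml by hand, here is a minimal sketch of adding those properties with the asadmin CLI (this assumes the default domain and a restart afterwards; use the matching delete-jvm-options commands to remove them once you are done debugging):

asadmin create-jvm-options -Dcom.sun.grizzly.displayConfiguration=true
asadmin create-jvm-options -Dcom.sun.grizzly.enableSnoop=true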

Configuring the VM

Make sure you remove the following jvm-options from domain.xml:

-Xmx512m -client

and replace it with

-server -XX:+AggressiveHeap -Xmx3500m -Xms3500m -Xss128k
-XX:+DisableExplicitGC

For anything other than Solaris/SPARC, 3500m needs to be 1400m. On a multi-CPU machine, add:

-XX:ParallelGCThreads=N -XX:+UseParallelOldGC

where N is the number of CPUs if < 8 (so really, you can leave it out altogether in that case) and N = number of CPUs / 2 otherwise. On a Niagara, add:

-XX:LargePageSizeInBytes=256m

You can also install the 64-bit JVM and use

-XX:+UseCompressedOops

with JDK 6u16 and later. A 64-bit JVM with

-XX:+UseCompressedOops

will allow you to specify larger Java heaps. This is especially useful on Windows, where a 32-bit JVM is limited to about

-Xmx1400m

of max Java heap. Note that a 64-bit JVM means you need to be running a 64-bit operating system. That's not an issue with Solaris, but many people who run Linux only run the 32-bit version, and Windows users will need a 64-bit Windows in order to use a 64-bit Windows JVM. A 64-bit JVM with -XX:+UseCompressedOops will give you larger Java heaps with 32-bit performance. 64-bit JVMs also make additional CPU registers available on Intel/AMD platforms.
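
To put the whole section together, here is an illustrative sketch of what the jvm-options entries in domain.xml could look like for an 8-CPU Linux x64 box running a 32-bit JVM (the heap sizes and GC thread count below are examples only; adjust them to your hardware as described above):

<jvm-options>-server</jvm-options>
<jvm-options>-Xmx1400m</jvm-options>
<jvm-options>-Xms1400m</jvm-options>
<jvm-options>-Xss128k</jvm-options>
<jvm-options>-XX:+AggressiveHeap</jvm-options>
<jvm-options>-XX:+DisableExplicitGC</jvm-options>
<jvm-options>-XX:ParallelGCThreads=4</jvm-options>
<jvm-options>-XX:+UseParallelOldGC</jvm-options>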

Configuring the Thread Pool

Make sure you take a look at what changed since v2 and how you can properly configure Grizzly in v3. The ones you should care about are acceptor-threads

network-config>transports>transport>tcp#acceptor-threads

and the number of worker threads

network-config>thread-pools>http-thread-pool

The recommended value for acceptor-threads is the number of cores/processors available on the machine you deploy on. I recommend you always run a sanity performance test using both the default value (1) and the number of cores, just to make sure. Next is to decide the number of threads required per HTTP port. With GlassFish v2, the thread pool configuration was shared amongst all HTTP ports, which was problematic, as some ports/listeners didn't need as many threads as port 8080. We fixed that in v3, so you can configure the thread pool per listener.

The ideal value for GlassFish v3 should always be between 20 and 500 maximum, as Grizzly uses a non-blocking I/O strategy under the hood and you don't need as many threads as with a blocking I/O server like Tomcat. I can't recommend a specific number here; it always depends on what your application is doing. For example, if you do a lot of database queries, you may want a higher number of threads in case the connection pool/JDBC driver locks on the database and "wastes" threads until it unlocks. In GlassFish v2, we did see a lot of applications that were hanging because all the worker threads were locked by the connection pool/JDBC. The good thing in v3 is that those "wasted" threads will eventually time out, something that wasn't available in v2. The default value is 5 minutes, and it is configurable via

configs.config.server-config.thread-pools.thread-pool.http-thread-pool.idle-thread-timeout-seconds
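
For example, here is a sketch of sizing the default http-thread-pool and changing that timeout with asadmin set (http-thread-pool is the default pool name in v3; the values are only illustrative):

asadmin set configs.config.server-config.thread-pools.thread-pool.http-thread-pool.max-thread-pool-size=100
asadmin set configs.config.server-config.thread-pools.thread-pool.http-thread-pool.idle-thread-timeout-seconds=600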

I/O strategy and buffer configuration

In terms of the buffers Grizzly uses for read and write I/O operations, the default (8192 bytes) should be the right value, but you can always play with the numbers:

network-config>protocols>protocol>http#header-buffer-length-byte
network-config>protocols>protocol>http#send-buffer-size
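
Before playing with those values, you can always inspect what the http protocol is currently using with asadmin get and a wildcard (a sketch that assumes the default http-listener-1 protocol name):

asadmin get "configs.config.server-config.network-config.protocols.protocol.http-listener-1.http.*"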

If your application does a lot of I/O write operations, you can also tell Grizzly to use an asynchronous write strategy:

-Dcom.sun.grizzly.http.asyncwrite.enabled=true

When this strategy is used, all I/O writes are executed using a dedicated thread, freeing the worker thread that executed the operation. Again, it can make a big difference. Another alternative to consider, if you notice that some write operations seem to take more time than expected, is to increase the pool of "write processors" by increasing the number of NIO Selectors:

-Dcom.sun.grizzly.maxSelectors=XXX

Make sure this number is never smaller than the number of worker threads, as that will give disastrous performance results. You should increase that number if your application uses the new Servlet 3.0 Async API, the Grizzly Comet Framework or Atmosphere (recommended). When asynchronous APIs are used, GlassFish needs more "write processors" than without.

Let Grizzly magically configure itself

Grizzly supports two "unsupported" properties in GlassFish that can be used to let GlassFish auto-configure itself. Those properties may or may not make a difference, but you can always try your configuration with and without them. The first one will configure the buffers, acceptor threads and worker threads for you:

-Dcom.sun.grizzly.autoConfigure=true

The second one will tell Grizzly to change its threading strategy to leader/follower

-Dcom.sun.grizzly.useLeaderFollower=true

It may or may not make a difference, but it is worth a try. You can also force Grizzly to complete all its I/O operations using a dedicated thread:

-Dcom.sun.grizzly.finishIOUsingCurrentThread=false

It may make a difference if your application does a small amount of I/O operations under load.

Cache your static resources!

By default, the Grizzly HTTP file cache is turned off. To get decent static resource performance, I strongly recommend you turn it on (it makes a gigantic difference):

network-config>protocols>protocol>http>file-cache#enabled
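
For example, assuming the default http-listener-1 protocol, a sketch of enabling it with asadmin set:

asadmin set configs.config.server-config.network-config.protocols.protocol.http-listener-1.http.file-cache.enabled=true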

Only for JDK 7

Now, if you are planning to use JDK 7, I recommend you switch Grizzly's ByteBuffer strategy and allocate memory outside the VM heap by using direct byte buffers:

-Dcom.sun.grizzly.useDirectByteBuffer=true

Do this only on JDK 7; with JDK 6, allocating heap byte buffers gives better performance than native ones. If you realize GlassFish is allocating too much native memory, just add

-Dcom.sun.grizzly.useByteBufferView=false

That should reduce the native memory usage.

WAP and Slow Network

If your application will be used by phones over the WAP protocol, or from a slow network, you may want to extend the default timeouts Grizzly uses when executing I/O operations:

-Dcom.sun.grizzly.readTimeout or
network-config>network-listeners>network-listener>transport#read-timeout

for read, and

-Dcom.sun.grizzly.writeTimeout or
network-config>network-listeners>network-listener>transport#write-timeout

for write. The default for both is 30 seconds. That means Grizzly will wait 30 seconds for incoming bytes to come from the client when processing a request, and 30 seconds when writing bytes back to the client, before closing the connection. On a slow network, 30 seconds for the read operation may not be enough, and some clients may not be able to execute requests. But be extra careful when changing the default value: if the value is too high, a worker thread will be blocked waiting for bytes and you may end up running out of worker threads. Not to mention that a malicious client may produce a denial of service by sending bytes very slowly; it may take up to 5 minutes (see the thread timeout configuration above) before Grizzly reclaims the worker threads. If you experience write timeouts, e.g. the remote client is not reading the bytes the server is sending, you may also increase that value, but instead I would recommend you enable the asynchronous write strategy described above to avoid locking worker threads.

Configuring the keep alive mechanism a-la-Tomcat

The strategy Grizzly uses to keep a remote connection open is to pool the file descriptor associated with the connection; when an I/O operation occurs, it takes the file descriptor from the pool and spawns a thread to execute the I/O operation. As you can see, Grizzly isn't blocking a thread waiting for the next I/O operation (the next client request). Tomcat's strategy is different: when Tomcat processes requests, a dedicated thread blocks between the client's requests for a maximum of 30 seconds. This gives really good throughput, but it doesn't scale very well, as you need one thread per connection. If your OS has tons of threads available, you can always configure Grizzly to use a similar strategy:

-Dcom.sun.grizzly.keepAliveLockingThread=true

Tomcat also has an algorithm that reduces the time a thread can block waiting for the next I/O operation, so under load threads don't stay blocked too long, giving other requests the chance to execute. You can enable a similar algorithm with Grizzly:

-Dcom.sun.grizzly.useKeepAliveAlgorithm=true

Depending on what your application is doing, you may get a nice performance improvement by enabling those properties.

Ask questions!

As I described in the post below, I will no longer be working on GlassFish by the time you read this, so make sure you ask your questions on the GlassFish mailing list (users@glassfish.dev.java.net), or you can always follow me on Twitter and ask your questions there!


Categories: Uncategorized

Leaving Sun Microsystems

It is always hard to write this type of post. As of December 4th, I will no longer be with Sun Microsystems.


It all started with working on Java EE 1.3 and a server called Tomcat. I was on Tomcat for a couple of years and then came up with the idea of an NIO-based HTTP connector for Tomcat called … Grizzly :-) Funny that it never ended up in Tomcat! Grizzly started with SJS AS 8.0 and slowly replaced the old Netscape C Runtime and Tomcat inside Sun's products (there are many, many of them now :-)). I then moved to a project called Minnow, a component-based server running on top of Grizzly and Maven 2. You started Grizzly and, at runtime, it took care of downloading/installing the artifacts needed to serve the request: containers installed on the fly! The project got canned as soon as I presented it internally … I've always had trouble inside Sun with my projects :-) … but it opened the door to GlassFish v3, as the code got re-used to create the Grizzly-based micro-kernel of the current GlassFish v3. I didn't waste my time after all :-) During that time GlassFish moved from being a zero to a hero, and now it is just amazing to see where GlassFish is and the perception the community has of it. My fingers hurt when I look at the email traffic we have generated on users@glassfish! Finally, the "Comet thing" caught up with me and in the end I created the Atmosphere framework, which is positively invading the planet these days :-).

I will really miss the team I've been working with for the last 7 years… Now the sad news: I will stop working on both GlassFish and Grizzly on December 4, letting something I created grow by itself. But the Grizzly community is quite mature and I'm fully confident we will see amazing releases in the future! BTW, since I am privately getting up to 30 emails per week from early adopters or existing GlassFish users, please make sure you either ping Sun's support directly or use the Grizzly/GlassFish public mailing lists to get the appropriate response from now on :-)

What about Atmosphere? This project is way too innovative to leave, and I will continue working on it or on something similar, depending on what Sun is up to :-).

Where am I going? I'm going to Ning.com. Don't worry, I will continue polluting this blog and, worse, you can always follow me on Twitter!


Categories: Uncategorized

Servlet 3.0 Asynchronous API or Atmosphere? Easy decision!

November 6, 2009 14 comments

One of the comments I'm getting about Atmosphere is: why should I use the framework instead of waiting for the Servlet 3.0 Async API? Well, it's simple: Atmosphere is much simpler, works with any existing Java WebServer (including Google App Engine!), and will auto-detect the Servlet 3.0 Async API if you deploy your application on a WebServer that supports it.


To make a fair comparison, let's write the hello world of Comet, a chat application, and compare the server-side code. Without going into technical details, let's just drop in the entire server code. First, the Servlet 3.0 version (it can probably be optimized a little):

  1 package web.servlet.async_request_war;
  2 
  3 import java.io.IOException;
  4 import java.io.PrintWriter;
  5 import java.util.Queue;
  6 import java.util.concurrent.ConcurrentLinkedQueue;
  7 import java.util.concurrent.BlockingQueue;
  8 import java.util.concurrent.LinkedBlockingQueue;
  9 
 10 import javax.servlet.AsyncContext;
 11 import javax.servlet.AsyncEvent;
 12 import javax.servlet.AsyncListener;
 13 import javax.servlet.ServletConfig;
 14 import javax.servlet.ServletException;
 15 import javax.servlet.annotation.WebServlet;
 16 import javax.servlet.http.HttpServlet;
 17 import javax.servlet.http.HttpServletRequest;
 18 import javax.servlet.http.HttpServletResponse;
 19 
 20 @WebServlet(urlPatterns = {"/chat"}, asyncSupported = true)
 21 public class AjaxCometServlet extends HttpServlet {
 22 
 23     private static final Queue<AsyncContext> queue = new ConcurrentLinkedQueue<AsyncContext>();
 24     private static final BlockingQueue<String> messageQueue = new LinkedBlockingQueue<String>();
 25     private static final String BEGIN_SCRIPT_TAG = "<script type='text/javascript'>\n";
 26     private static final String END_SCRIPT_TAG = "</script>\n";
 27     private static final long serialVersionUID = -2919167206889576860L;
 28 
 29     private Thread notifierThread = null;
 30 
 31     @Override
 32     public void init(ServletConfig config) throws ServletException {
 33         Runnable notifierRunnable = new Runnable() {
 34             public void run() {
 35                 boolean done = false;
 36                 while (!done) {
 37                     String cMessage = null;
 38                     try {
 39                         cMessage = messageQueue.take();
 40                         for (AsyncContext ac : queue) {
 41                             try {
 42                                 PrintWriter acWriter = ac.getResponse().getWriter();
 43                                 acWriter.println(cMessage);
 44                                 acWriter.flush();
 45                             } catch(IOException ex) {
 46                                 System.out.println(ex);
 47                                 queue.remove(ac);
 48                             }
 49                         }
 50                     } catch(InterruptedException iex) {
 51                         done = true;
 52                         System.out.println(iex);
 53                     }
 54                 }
 55             }
 56         };
 57         notifierThread = new Thread(notifierRunnable);
 58         notifierThread.start();
 59     }
 60 
 61     @Override
 62     protected void doGet(HttpServletRequest req, HttpServletResponse res) throws ServletException, IOException {
 63         res.setContentType("text/html");
 64         res.setHeader("Cache-Control", "private");
 65         res.setHeader("Pragma", "no-cache");
 66         
 67         PrintWriter writer = res.getWriter();
 68         // for IE
 69         writer.println("<!-- Comet is a programming technique that enables web servers to send data to the client without having any need for the client to request it. -->\n");
 70         writer.flush();
 71 
 72         req.setAsyncTimeout(10 * 60 * 1000);
 73         final AsyncContext ac = req.startAsync();
 74         queue.add(ac);
 75         req.addAsyncListener(new AsyncListener() {
 76             public void onComplete(AsyncEvent event) throws IOException {
 77                 queue.remove(ac);
 78             }
 79 
 80             public void onTimeout(AsyncEvent event) throws IOException {
 81                 queue.remove(ac);
 82             }
 83         });
 84     }
 85 
 86     @Override
 87     @SuppressWarnings("unchecked")
 88     protected void doPost(HttpServletRequest req, HttpServletResponse res) throws ServletException, IOException {
 89         res.setContentType("text/plain");
 90         res.setHeader("Cache-Control", "private");
 91         res.setHeader("Pragma", "no-cache");
 92
 93         req.setCharacterEncoding("UTF-8");
 94         String action = req.getParameter("action");
 95         String name = req.getParameter("name");
 96 
 97         if ("login".equals(action)) {
 98             String cMessage = BEGIN_SCRIPT_TAG + toJsonp("System Message", name + " has joined.") + END_SCRIPT_TAG;
 99             notify(cMessage);
100 
101             res.getWriter().println("success");
102         } else if ("post".equals(action)) {
103             String message = req.getParameter("message");
104             String cMessage = BEGIN_SCRIPT_TAG + toJsonp(name, message) + END_SCRIPT_TAG;
105             notify(cMessage);
106 
107             res.getWriter().println("success");
108         } else {
109             res.sendError(422, "Unprocessable Entity");
110         }
111     }
112 
113     @Override
114     public void destroy() {
115         queue.clear();
116         notifierThread.interrupt();
117     }
118 
119     private void notify(String cMessage) throws IOException {
120         try {
121             messageQueue.put(cMessage);
122         } catch(Exception ex) {
123             throw new IOException(ex);
124         }
125     }
126 
127     private String escape(String orig) {
128         StringBuffer buffer = new StringBuffer(orig.length());
129 
130         for (int i = 0; i < orig.length(); i++) {
131             char c = orig.charAt(i);
132             switch (c) {
133             case '\b':
134                 buffer.append("\\b");
135                 break;
136             case '\f':
137                 buffer.append("\\f");
138                 break;
139             case '\n':
140                 buffer.append("<br />");
141                 break;
142             case '\r':
143                 // ignore
144                 break;
145             case '\t':
146                 buffer.append("\\t");
147                 break;
148             case '\'':
149                 buffer.append("\\'");
150                 break;
151             case '\"':
152                 buffer.append("\\\"");
153                 break;
154             case '\\':
155                 buffer.append("\\\\");
156                 break;
157             case '<':
158                 buffer.append("&lt;");
159                 break;
160             case '>':
161                 buffer.append("&gt;");
162                 break;
163             case '&':
164                 buffer.append("&amp;");
165                 break;
166             default:
167                 buffer.append(c);
168             }
169         }
170 
171         return buffer.toString();
172     }
173 
174     private String toJsonp(String name, String message) {
175         return "window.parent.app.update({ name: \"" + escape(name) + "\", message: \"" + escape(message) + "\" });\n";
176     }
177 }

OK, now with Atmosphere, the same code consists of:

  1 package org.atmosphere.samples.chat.resources;
  2 
  3 import javax.ws.rs.Consumes;
  4 import javax.ws.rs.GET;
  5 import javax.ws.rs.POST;
  6 import javax.ws.rs.Path;
  7 import javax.ws.rs.Produces;
  8 import javax.ws.rs.WebApplicationException;
  9 import javax.ws.rs.core.MultivaluedMap;
 10 import org.atmosphere.annotation.Broadcast;
 11 import org.atmosphere.annotation.Schedule;
 12 import org.atmosphere.annotation.Suspend;
 13 import org.atmosphere.util.XSSHtmlFilter;
 14 
 15 @Path("/")
 16 public class ResourceChat {
 17 
 18     @Suspend
 19     @GET
 20     @Produces("text/html;charset=ISO-8859-1")
 21     public String suspend() {
 22         return "";
 23     }
 24 
 25     @Broadcast({XSSHtmlFilter.class, JsonpFilter.class})
 26     @Consumes("application/x-www-form-urlencoded")
 27     @POST
 28     @Produces("text/html;charset=ISO-8859-1")
 29     public String publishMessage(MultivaluedMap form) {
 30         String action = form.getFirst("action");
 31         String name = form.getFirst("name");
 32 
 33         if ("login".equals(action)) {
 34             return ("System Message" + "__" + name + " has joined.");
 35         } else if ("post".equals(action)) {
 36             return name + "__" + form.getFirst("message");
 37         } else {
 38             throw new WebApplicationException(422);
 39         }
 40     }
 41 }

OK, so what's the deal? What makes Atmosphere so easy? The new Servlet 3.0 Async API offers:

  • Method to suspend a response: HttpServletRequest.startAsync()
  • Method to resume a response: AsyncContext.complete()

Atmosphere offers:

  • Annotation to suspend: @Suspend
  • Annotation to resume: @Resume
  • Annotation to broadcast (or push) events to the set of suspended responses: @Broadcast
  • Annotation to filter and serialize broadcasted events using a BroadcastFilter (XSSHtmlFilter.class, JsonpFilter.class)
  • Built-in support for browser implementation incompatibilities (e.g. no need to output comments as in the Servlet 3.0 sample, line 69). Atmosphere will work around all those issues for you.

With the Servlet 3.0 Async API, the missing part is how you share information with suspended responses. In the current chat sample, you need to create your own Thread/Queue in order to broadcast events to your set of suspended responses (lines 32 to 56). This is not a big deal, but you will need to do something like that for all your Servlet 3.0 Async based applications… or use a framework that does it for you!

Still not convinced? Well, you can write your Atmosphere applications today and not have to wait for a Servlet 3.0 implementation (OK, easy plug for my other project: GlassFish v3 supports it pretty well!). Why? Atmosphere auto-detects the best asynchronous API when you deploy your application. It first tries to look up the Servlet 3.0 Async API. If that fails, it tries to find the WebServer's native API like Grizzly Comet (GlassFish), CometProcessor (Tomcat), Continuation (Jetty), HttpEventServlet (JBossWeb), AsyncServlet (WebLogic), or Google App Engine (Google). Finally, it falls back to using a blocking I/O thread to emulate support for asynchronous events.

You don't want to use Java? Fine, try the Atmosphere Grails plug-in, or Atmosphere in PrimeFaces if you like JSF, or use Scala:

  1 package org.atmosphere.samples.scala.chat
  2 
  3 import javax.ws.rs.{GET, POST, Path, Produces, WebApplicationException, Consumes}
  4 import javax.ws.rs.core.MultivaluedMap
  5 import org.atmosphere.annotation.{Broadcast, Suspend}
  6 import org.atmosphere.util.XSSHtmlFilter
  7 
  8 @Path("/chat")
  9 class Chat {
 10 
 11     @Suspend
 12     @GET
 13     @Produces(Array("text/html;charset=ISO-8859-1"))
 14     def suspend() = {
 15         ""
 16     }
 17 
 18     @Broadcast(Array(classOf[XSSHtmlFilter],classOf[JsonpFilter]))
 19     @Consumes(Array("application/x-www-form-urlencoded"))
 20     @POST
 21     @Produces(Array("text/html;charset=ISO-8859-1"))
 22     def publishMessage(form: MultivaluedMap[String, String]) = {
 23         val action = form.getFirst("action")
 24         val name = form.getFirst("name")
 25 
 26         val result: String = if ("login".equals(action)) "System Message" + "__" + name + " has joined."
 27              else if ("post".equals(action)) name + "__" + form.getFirst("message")
 28              else throw new WebApplicationException(422)
 29 
 30         result
 31     }
 32 
 33 
 34 }

Checkmate!

Now, I can understand that you already have an existing application and just want to update it with suspend/resume/broadcast functionality without having to rewrite it completely. Fine, let's just use Atmosphere's Meteor API:

  1 package org.atmosphere.samples.chat;
  2 
  3 import java.io.IOException;
  4 import java.util.LinkedList;
  5 import java.util.List;
  6 import javax.servlet.http.HttpServlet;
  7 import javax.servlet.http.HttpServletRequest;
  8 import javax.servlet.http.HttpServletResponse;
  9 import org.atmosphere.cpr.BroadcastFilter;
 10 import org.atmosphere.cpr.Meteor;
 11 import org.atmosphere.util.XSSHtmlFilter;
 12 
 13 public class MeteorChat extends HttpServlet {
 14 
 15     private final List list;
 16 
 17     public MeteorChat() {
 18         list = new LinkedList();
 19         list.add(new XSSHtmlFilter());
 20         list.add(new JsonpFilter());
 21     }
 22 
 23     @Override
 24     public void doGet(HttpServletRequest req, HttpServletResponse res) throws IOException {
 25         Meteor m = Meteor.build(req, list, null);
 26 
 27         req.getSession().setAttribute("meteor", m);
 28 
 29         res.setContentType("text/html;charset=ISO-8859-1");
 30         res.addHeader("Cache-Control", "private");
 31         res.addHeader("Pragma", "no-cache");
 32 
 33         m.suspend(-1);
 34         m.broadcast(req.getServerName() + "__has suspended a connection from " + req.getRemoteAddr());
 35     }
 36 
 37     public void doPost(HttpServletRequest req, HttpServletResponse res) throws IOException {
 38         Meteor m = (Meteor)req.getSession().getAttribute("meteor");
 39         res.setCharacterEncoding("UTF-8");
 40         String action = req.getParameterValues("action")[0];
 41         String name = req.getParameterValues("name")[0];
 42 
 43         if ("login".equals(action)) {
 44             req.getSession().setAttribute("name", name);
 45             m.broadcast("System Message from " + req.getServerName() + "__" + name + " has joined.");
 46             res.getWriter().write("success");
 47             res.getWriter().flush();
 48         } else if ("post".equals(action)) {
 49             String message = req.getParameterValues("message")[0];
 50             m.broadcast(name + "__" + message);
 51             res.getWriter().write("success");
 52             res.getWriter().flush();
 53         } else {
 54             res.setStatus(422);
 55 
 56             res.getWriter().write("success");
 57             res.getWriter().flush();
 58         }
 59     }
 60 }

The Servlet 3.0 Async API: game over! Finally, I must admit that the Servlet 3.0 Async API has an asynchronous dispatcher you can use to forward requests asynchronously:

    public void doGet(HttpServletRequest req, HttpServletResponse res)
            throws IOException, ServletException {

        final AsyncContext ac = req.startAsync();
        final String target = req.getParameter("target");

        Timer asyncTimer = new Timer("AsyncTimer", true);
        asyncTimer.schedule(
            new TimerTask() {
                @Override
                public void run() {
                    ac.dispatch(target);
                }
            },
        5000);
    }

With Atmosphere, the same code will work, but your application will only work when deployed on a Servlet 3.0 WebServer. Instead, you can implement the same functionality using the Broadcaster's delayed broadcast API and still have a portable application without being limited to the Servlet 3.0 Async API… that's something I will talk about in my next blog!

For any questions or to download Atmosphere, go to our main site and use our Nabble forum (no subscription needed) or follow us on Twitter and tweet your questions there!



Categories: Uncategorized

Writing a RESTful and Comet based PubSub application using Atmosphere in less than 10 lines

November 3, 2009 3 comments

Writing a publisher/subscriber (PubSub) application is quite simple with Atmosphere using the atmosphere-jersey module.


The main idea here is to use Comet to suspend the response when a client subscribes to a topic, and to use REST to publish messages to those suspended responses. First, let's bind our application to the root URI using the @Path annotation:

package org.atmosphere.samples.pubsub;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import org.atmosphere.annotation.Broadcast;
import org.atmosphere.annotation.Schedule;
import org.atmosphere.annotation.Suspend;
import org.atmosphere.cpr.Broadcaster;
import org.atmosphere.jersey.Broadcastable;

@Path("/") public class PubSub {

Next, let's implement the subscribe operation using @Suspend and inject Atmosphere's Broadcaster using the @PathParam annotation:

    @Suspend
    @GET
    @Path("/{topic}")
    @Produces("text/plain;charset=ISO-8859-1")
    public Broadcastable subscribe(@PathParam("topic") Broadcaster topic) {
        return new Broadcastable("", topic);
    }

The code above will be invoked when we want to create a new topic (@Path("/{topic}")). We return a Broadcastable instance, which tells Atmosphere to use the Broadcaster that got injected via @PathParam when broadcasting to {topic}. Finally, the underlying response will be suspended forever via the @Suspend annotation. Next, let's implement the publish operation:

    @GET
    @Path("/{topic}/{message}")
    @Produces("text/plain;charset=ISO-8859-1")
    @Broadcast
    public Broadcastable publish(@PathParam("topic") Broadcaster topic,
                                 @PathParam("message") String message) {
        return new Broadcastable(message, topic);
    }

To publish, we just need to send a URI of the form "/{topic}/{message}", as defined by @Path("/{topic}/{message}"), and again, the Broadcaster we want to use to broadcast the message will be injected based on the {topic} value. Finally, we just return a Broadcastable object and let Atmosphere broadcast the value to all suspended connections. That's it! Now we can see it in action by doing:

Create a topic
curl http://localhost:8080/atmosphere-pubsub/myAtmosphereTopic
Publish to that topic
curl http://localhost:8080/atmosphere-pubsub/myAtmosphereTopic/Atmosphere_is_cool

The source for the entire sample can be viewed here. Really simple, isn't it?
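
If you want to try it locally, here is a sketch of a web.xml that deploys the resource through the AtmosphereServlet. The Jersey packages init-param and the exact setup are assumptions on my side (they may differ between Atmosphere versions), so check the sample's own descriptor:

<servlet>
    <servlet-name>AtmosphereServlet</servlet-name>
    <servlet-class>org.atmosphere.cpr.AtmosphereServlet</servlet-class>
    <init-param>
        <!-- assumption: standard Jersey 1.x init-param pointing at the PubSub resource package -->
        <param-name>com.sun.jersey.config.property.packages</param-name>
        <param-value>org.atmosphere.samples.pubsub</param-value>
    </init-param>
</servlet>
<servlet-mapping>
    <servlet-name>AtmosphereServlet</servlet-name>
    <url-pattern>/*</url-pattern>
</servlet-mapping>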

For any questions or to download Atmosphere, go to our main site and use our Nabble forum (no subscription needed) or follow us on Twitter and tweet your questions there!



Categories: Atmosphere