
Archive for October, 2008

Entering the Atmosphere Framework: Comet for Everyone, Everywhere

Introducing Atmosphere, a new framework for building portable Comet-based applications. Yes, portable, which means it can run on Tomcat, Jetty, Grizzly/GlassFish or any web server that supports Servlet 2.5 … and without the need to learn all those private APIs floating around…


Currently, writing a portable Comet application is impossible: JBossWeb has AIO, Tomcat has a different AIO API, Jetty has its Continuation API and pre-Servlet 3.0 API support, Grizzly has its Comet Framework and Grizzlet API, etc. So, frameworks like DWR, ICEFaces and Bindows all added native support and an abstraction layer in order to support the different Comet APIs. Worse, if your application uses those APIs directly, then you are stuck with one web server. Not bad if you are using Grizzly Comet, but if you are using a competitor, then you cannot meet the Grizzly!

The Servlet EG is currently working on a proposal to add support for Comet in the upcoming Servlet 3.0 specification, but it may take ages before the planet fully supports the spec. And the proposal will contain only a small subset of the features some containers already support, like asynchronous I/O (Tomcat, Grizzly), a container-managed thread pool for concurrently handling the push operations, filters for push operations, etc. Not to mention that, using Atmosphere, frameworks will no longer have to care about native implementations, but can instead build on top of Atmosphere. Protocols like Bayeux will come for free, and will run on all web servers by using their native APIs under the hood.
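For readers who have not followed the Servlet 3.0 discussions, here is a rough sketch of the container-managed asynchronous style the proposal is heading toward. The names (startAsync, AsyncContext) are taken from the draft and are illustrative only; they may change before the spec is final, and today every container still requires its own native code for the same thing:

import java.io.IOException;
import javax.servlet.AsyncContext;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Sketch of the proposed Servlet 3.0 async style; the servlet would also need
// to be registered as supporting asynchronous processing.
public class DraftAsyncServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse res)
            throws IOException {
        // Ask the container to keep the connection open after service() returns.
        final AsyncContext async = req.startAsync(req, res);

        // Later, from any other thread (a broadcaster, a JMS listener, ...),
        // push some data and resume/close the suspended connection.
        new Thread(new Runnable() {
            public void run() {
                try {
                    async.getResponse().getWriter().println("pushed data");
                } catch (IOException ignored) {
                } finally {
                    async.complete();
                }
            }
        }).start();
    }
}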

So I’m launching Atmosphere, hoping to close the gap and simplify the creation of Comet-based applications, based on the experience/feedback I have gathered over the last two years with the Grizzly Comet Framework. Atmosphere is a POJO-based framework using Inversion of Control (IoC), trying to bring Ajax Push/Comet to the masses! Atmosphere builds on top of the Jersey and Grizzly Comet code. Now I have to be honest, the project is just starting (got some trouble internally since I leaked the information :-)) and it might take a couple of months before I can support all web servers. What I’m targeting is to evolve the Grizzlet concept and make the programming model really easy. So far what I have looks like:


/**
 * This Grizzlet can only receive pushes from this application, e.g. from other Grizzlets defined under the myGrizzlet/* path
 * (defined using the @Path annotation)
 */
@Grizzlet(Grizzlet.Scope.APPLICATION)
@Path("myGrizzlet")
public class MyGrizzlet{
    /**
     * Broadcast notifications to all Grizzlets defined inside the current VM.
     */
    @Broadcaster(Grizzlet.Scope.VM)
    private Broadcaster bc;
    
    /**
     * Suspend the connection for 6000 milliseconds on GET request, and push the 
     * return value.
     */
    @Suspend(6000)
    @GET     
    @Push
    public String onGet(){        
        bc.broadcast("A new user has connected");
        return "Suspending the connection";
    }


    /**
     * On POST, push the return value.
     */
    @POST     
    @Push
    public String onPost(@Context HttpServletRequest req,
                         @Context HttpServletResponse res) throws IOException{  
        res.setStatus(200);
        res.getWriter().println("OK, info pushed");
        return req.getParameter("chatMessage");
    }    
    
    /**
     * Resume the connection after one push (long polling), and push the return
     * value.
     */
    @Resume(1)
    @Push
    public String onPush(String event,@Context HttpServletResponse res) throws IOException{
        res.getWriter().println(event);  
        return "Resuming the connection";
    }
}

The example above is of course quite simple, but it demonstrates the goal I have with Atmosphere: make it easy for anybody to write a Comet application. The above is a ridiculously simple chat application which suspends the request when a GET is sent, pushes data on POST and just blindly writes the “pushed” data.
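To make that flow concrete, here is a minimal, hypothetical client exercising the Grizzlet above with plain HttpURLConnection. The URL assumes the application is deployed at http://localhost:8080/myGrizzlet (derived from the @Path annotation); the deployment details and class name are illustrative only, not something Atmosphere prescribes:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class ChatClientSketch {

    // Assumed deployment URL; adjust to wherever the Grizzlet actually lives.
    private static final String APP = "http://localhost:8080/myGrizzlet";

    public static void main(String[] args) throws Exception {

        // From a second thread, POST a chatMessage after a short delay so the
        // GET below has time to connect and get suspended by @Suspend(6000).
        new Thread(new Runnable() {
            public void run() {
                try {
                    Thread.sleep(2000);
                    HttpURLConnection post =
                            (HttpURLConnection) new URL(APP).openConnection();
                    post.setDoOutput(true);
                    post.setRequestProperty("Content-Type",
                            "application/x-www-form-urlencoded");
                    OutputStream out = post.getOutputStream();
                    out.write("chatMessage=Hello from the chat".getBytes("UTF-8"));
                    out.close();
                    System.out.println("POST status: " + post.getResponseCode());
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }).start();

        // GET: onGet() suspends the connection; the POST above triggers a push,
        // and onPush() resumes the connection after one push (long polling).
        HttpURLConnection get = (HttpURLConnection) new URL(APP).openConnection();
        BufferedReader in =
                new BufferedReader(new InputStreamReader(get.getInputStream()));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println("pushed: " + line);
        }
        in.close();
    }
}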

So, in the upcoming weeks I will start giving more examples and, more importantly, will push the code to the repository. I’m unfortunately distracted by other projects I’m working on, so the project might start slowly, but my goal is to build a strong community like I did with the Grizzly project, so the project evolves faster and is open to anybody… Interested? Just sign up to the Atmosphere mailing list. I have plenty of work for anybody interested in participating!


Categories: Uncategorized

Preventing Rogue Applications from Affecting the Overall Performance of GlassFish Prelude

An application server can get into really bad shape when a rogue application/component gets deployed into it. How do you prevent that situation using GlassFish Prelude? With the help of the bear, yes, you can tame those rogue animals…


Just in time for the upcoming v3 Prelude release, Alexey and I have added a feature that supports web application isolation. You can isolate rogue components/applications by allocating them a subset of the available threads or heap memory. I’ve already described the feature in the context of Ajax-based applications deployed in GlassFish, but this time you can apply the same technique for isolating applications from others’ bad behavior. OK, but under which circumstances would you want to do that? Well, there are several situations where you don’t want your application to be affected by other deployed applications:

  • Delayed responses: when GlassFish is under load, you want to make sure your application will never get delayed by other applications that are doing expensive calculations
  • Thread deadlocks/starvation: an application using JDBC might eventually eat a significant number of GlassFish’s WorkerThreads because of a slow remote database. All of those threads might end up stuck waiting for a response from the remote database. Worse, all your threads can lock up, and there will be no available thread for servicing incoming requests (a minimal simulation of this scenario follows this list)
  • And many other reasons…
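Here is a small, self-contained simulation of that starvation scenario (plain JDK code, not GlassFish internals; the pool size of 4 and the timings are purely illustrative): a fixed pool of worker threads all block on a slow “database” call, and a cheap request from a well-behaved application is left waiting in the queue:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class StarvationDemo {

    public static void main(String[] args) {
        // A tiny "container" with 4 worker threads, standing in for a Grizzly pool.
        ExecutorService workers = Executors.newFixedThreadPool(4);

        // The rogue application: every request blocks on an unresponsive database.
        for (int i = 0; i < 4; i++) {
            workers.execute(new Runnable() {
                public void run() {
                    try {
                        Thread.sleep(60000); // simulated hung JDBC call
                    } catch (InterruptedException ignored) {
                    }
                }
            });
        }

        // The well-behaved application: a cheap request that should answer instantly,
        // but it sits in the queue because every worker thread is blocked above.
        System.out.println("fast request submitted at " + System.currentTimeMillis());
        workers.execute(new Runnable() {
            public void run() {
                System.out.println("fast request finally served at "
                        + System.currentTimeMillis());
            }
        });

        workers.shutdown();
    }
}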

OK, so how can we achieve such isolation using GlassFish v3 Prelude? Using Grizzly’s ProtocolFilter!

Let’s recap, from a previous blog, how it usually works in GlassFish when a request comes in. When requests come in, the Grizzly HTTP module, on top of which Prelude is built, puts them into a queue (see below):

[Figure: incoming requests queued by the Grizzly HTTP module]

When a WorkerThread becomes available, it takes a request from the queue and executes it. When no threads are available, requests wait in the queue to be processed (in red below):

[Figure: WorkerThreads taking requests from the queue; waiting requests shown in red]

Now, the normal behavior is to place the request at the end of the queue so every connection (or user) is equally/fairly serviced. Independently of how the request is executed, an application which needs to update its content in real time (or very fast) might face a situation where the request is placed at the end of the queue, delaying the response from milliseconds to seconds. Hence, the usability of the application might significantly suffer if the server is getting under load and the queue is very large, or if a rogue/slow application has already reserved the majority of the threads.

One solution is to isolate your application from the rogue applications. How? By examining incoming requests and assigning them to priority queues. Being able to prioritize requests might significantly improve the usability of an application and prevent rogue applications from affecting its environment. Why? Because with resource isolation, you can make sure that specific requests will always be executed first and never get stuck behind others, by either being placed at the head of the queue or by being dispatched to a dedicated queue:

[Figure: requests dispatched to dedicated/priority queues based on the request URI]

As an example, requests taking the form /myApp/realTime might always get executed before requests for /rogueApp/.
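To give a feel for what happens under the hood, here is a made-up sketch (not the actual Grizzly ProtocolFilter/RCM code) that reserves a dedicated thread pool for the /myApp/realTime prefix so that /rogueApp requests can never starve it; the class name IsolatingDispatcher and the pool sizes are purely illustrative:

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative resource-isolation dispatcher, not GlassFish/Grizzly code.
public class IsolatingDispatcher {

    // One dedicated thread pool (queue) per reserved URI prefix, plus a default pool.
    private final Map<String, ExecutorService> reserved =
            new LinkedHashMap<String, ExecutorService>();
    private final ExecutorService defaultPool = Executors.newFixedThreadPool(2);

    public IsolatingDispatcher() {
        // Requests for /myApp/realTime always have their own threads available,
        // no matter how many /rogueApp requests are piled up on the default pool.
        reserved.put("/myApp/realTime", Executors.newFixedThreadPool(4));
    }

    /** Dispatch a request to the pool reserved for its URI, or to the default pool. */
    public void dispatch(String requestURI, Runnable handler) {
        for (Map.Entry<String, ExecutorService> entry : reserved.entrySet()) {
            if (requestURI.startsWith(entry.getKey())) {
                entry.getValue().execute(handler);
                return;
            }
        }
        defaultPool.execute(handler);
    }
}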

Want to try it? Then download GlassFish v3 Prelude and do the following (you can use the admin-gui if you don’t want to edit the file manually):


% Install GlassFish
% Edit ${glassfish.home}/domains/domain1/config/domain.xml
% Under the http-listener element, add the following property:
  <property name="rcmSupport" value="true"/>
% Add the following jvm-options element:
  <jvm-options>-Dcom.sun.grizzly.rcm.policyMetric=/yourApp/requestURI1|0.5,/yourApp/requestURI2|0.3</jvm-options>
% Start GlassFish

In the example above, Grizzly will reserve 50% of the threads for requests taking the form /yourApp/requestURI1, 30% for /yourApp/requestURI2, and the remainder for all other incoming requests. Technically, it means three queues will be created and Grizzly will dispatch requests to them based on the request URI.
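As a quick worked example, assuming (purely for illustration) that the listener’s pool has 10 worker threads, /yourApp/requestURI1 would get 0.5 × 10 = 5 threads, /yourApp/requestURI2 would get 0.3 × 10 = 3 threads, and the remaining 2 threads would service everything else.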

In conclusion, being able to isolate rogue applications/components based on policy rules (here, request based) might significantly improve the performance of your application. Have doubts? Just try it :-)


Categories: Uncategorized