
Archive for the ‘GlassFish’ Category

GlassFish Vee(gri)zzly(v3): Unofficial benchmark results

We are still working hard on GlassFish v3 and will soon release a new technology preview (JavaOne is coming :-)). What about its performance? As you may know, in v3 we have cleaned up the noise by using a modular approach (thanks Jerome!). In Quebecois (French people probably use some English word to say the same ;-)), we say “Faire le Grand Menage” (do the big spring cleaning).


What about Grizzly-In-V3? In v3, the monster is now integrated using its framework module (no http pollution like in v2, just NIO :-)). So when v3 starts, part of its startup logic runs on top of Grizzly (using the 1.7.3.1 runtime). Does it make a difference at startup and, more importantly, at runtime? It makes a huge one because, with the modular nature of v3, there are no longer modules adding noise to Grizzly. So let’s benchmark static resource (.html) performance using Faban (see Scott’s blog about what Faban is) by using:

java -classpath fabandriver.jar:fabancommon.jar -server \
-Xmx1500m -Xms1500m com.sun.faban.driver.ab -c 2000 http://xxx:8080/index.html

This command runs 2000 separate clients (each in its own thread), each of which continually requests index.html with no think time. The common driver reports three pieces of information: the number of requests served per second, the average response time per request, and the 90th percentile for requests: 90% of requests were served with that particular response time or less. Let’s focus on the number of operations per second for now by running the test against Grizzly Http 1.0.20, Grizzly Http 1.7.3, v2 and v3:

Grizzly_1.0 Grizzly_1.7 GlassFish_v2 GlassFish_v3
 5739.025    5979.917    5432.300     5882.808

Hey hey, v3 is almost as fast as Grizzly 1.7.3. Why? It’s really because v3 runs on top of Grizzly directly, without any noise in between. Now let’s compare dynamic resource performance using a very simple Servlet:


import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.*;

// Class wrapper and imports added so the snippet compiles; the class name is mine.
public class HelloWorldServlet extends HttpServlet {
    public void doPost(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        response.setContentType("text/plain");
        response.setStatus(HttpServletResponse.SC_OK);
        PrintWriter wrt = response.getWriter();
        for (int i = 0; i < 1; i++) {
            wrt.write("Hello world");
        }
    }
}

Yes, it’s a dummy Servlet!


Grizzly_1.7 GlassFish_v2 GlassFish_v3
 3621.600    3256.775      2998.225

v3 is a little slower here… most probably because of the container’s mapping algorithm. What is that? With v3, a single tcp/udp port can handle any type of web application (jruby, groovy, servlet, phobos, etc., all built on top of Grizzly http :-)). So when a request comes in, v3 inspects it and dispatches it to the proper container. That code introduces overhead, and I suspect it is part of the regression. But a lot of changes have been made in v3, so I might be wrong… still, I consider the numbers very impressive given that we now have a modular architecture, which required a lot of changes.
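To picture what that mapping step does, here is a purely illustrative sketch of the idea; the names below are hypothetical and this is not the actual v3 code:

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    // Illustrative only: one port, many containers. Each request is inspected
    // and handed to whichever container adapter claims it.
    interface ContainerAdapter {
        boolean handles(String uri);
        void service(String uri) throws IOException;
    }

    class SimpleMapper {
        private final List<ContainerAdapter> containers =
                new ArrayList<ContainerAdapter>();

        void register(ContainerAdapter adapter) {
            containers.add(adapter);
        }

        // This per-request lookup is the kind of overhead suspected above.
        void dispatch(String uri) throws IOException {
            for (ContainerAdapter adapter : containers) {
                if (adapter.handles(uri)) {
                    adapter.service(uri);
                    return;
                }
            }
            throw new IOException("No container mapped for " + uri);
        }
    }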

But wait a minute. Grizzly 1.7 has a Servlet container? Naaa, not a real one. I’m just experimenting with a tiny Servlet container, with great help from the Grizzly community. But forget Grizzly for now and look at the numbers :-) The goal for our v3 official release is set: we want to be better than Grizzly, which I’m sure we will be :-). We need to improve the Grizzly extension in v3… in case you want to learn how to write such an extension, just stop by and see us at JavaOne!


Categories: GlassFish, Grizzly

Writing a Comet web application using GlassFish

This blog describes how to write a Comet-enabled web application using GlassFish’s Comet Engine.

A couple of months ago, I blogged about the technical details of GlassFish‘s Comet support. Since then, I’ve received a lot of feedback on the blog and also privately. Surprisingly, a lot of people have started using the API and asked for a blog describing a basic example. So here it comes … a basic Chat Servlet :-)


First, to enable Comet Support in GlassFish, add the following in ${glassfish.home}/domains/domain1/config/domain.xml


        <http-listener acceptor-threads="1" address="0.0.0.0" 
           blocking-enabled="false" default-virtual-server="server"
           enabled="true" family="inet" id="http-listener-1" port="8080"
           security-enabled="false" server-name="" xpowered-by="true">
                <property name="cometSupport" value="true"/>
        </http-listener>

Next, add in your web.xml:


        <load-on-startup>0</load-on-startup>

OK, now the interesting parts. The first thing to decide when writing a Comet-enabled web app is which component will get polled. For this example, I will use a Servlet. First, the Servlet needs to register with the CometEngine:


    public void init(ServletConfig config) throws ServletException {
        super.init(config);
        contextPath = config.getServletContext().getContextPath() + "/chat";
        CometEngine cometEngine = CometEngine.getEngine();
        CometContext context = cometEngine.register(contextPath);
        context.setExpirationDelay(60 * 1000);
    }

The important part to define first is the context path that will be considered for Comet processing (or polling). All requests of the form http://<host>:<port>/context/chat will be considered for polling. The context.setExpirationDelay() call determines how long a request will be polled. For this example, I’ve set the expiration delay to 60 seconds. After 60 seconds, the polled connection will be closed.

Next, you need to define a Comet request handler which will get invoked every time the CometContext is updated. For the Chat, the handler will be created after the user has entered a user name (by issuing http://…/login.jsp):


                if ("login".equals(action)) {
                    String username = request.getParameter("username");
                    request.getSession(true).setAttribute("username", username);

                    if (firstServlet != -1){
                         cometContext.notify("User " + username
                          + " from " + request.getRemoteAddr()
                          + " is joining the chat.",CometEvent.NOTIFY,
                                 firstServlet);
                    }

                    response.sendRedirect("chat.jsp");
                    return;
                } else if ("post".equals(action)){
                    String username = (String) request.getSession(true)
                        .getAttribute("username");
                    String message = request.getParameter("message");
                    cometContext.notify("[ " + username + " ]  "
                            + message + "<br/>");
                    response.sendRedirect("post.jsp");
                    return;
                } else if ("openchat".equals(action)) {
                    response.setContentType("text/html");
                    CometRequestHandler handler = new CometRequestHandler();
                    handler.clientIP = request.getRemoteAddr();
                    handler.attach(response.getWriter());
                    cometContext.addCometHandler(handler);
                    String username = (String) request.getSession(true)
                        .getAttribute("username");
                    response.getWriter().println("<h2>Welcome "
                            + username + " </h2>");
                    return;

After the user has logged in, the browser is redirected to the chat.jsp page, which will send action="openchat". The CometHandler (the class that will update the chat message box) implementation looks like:


        public void onEvent(CometEvent event) throws IOException{
            try{

                if (firstServlet != -1 && this.hashCode() != firstServlet){
                     event.getCometContext().notify("User " + clientIP
                      + " is getting a new message.",CometEvent.NOTIFY,
                             firstServlet);
                }
                if ( event.getType() != CometEvent.READ ){
                    printWriter.println(event.attachment());
                    printWriter.flush();
                }
            } catch (Throwable t){
               t.printStackTrace();
            }
        }


        public void onInitialize(CometEvent event) throws IOException{
            ....
        }

Every time a user posts a new message, CometHandler.onEvent(…) will be invoked and the chat message pushed back to the browser.

On the client side, the chat.jsp page looks like


<frameset>
  <iframe name="chat" src ="/comet/chat?action=openchat" width="100%" scrolling="auto"></iframe>
  <iframe name="post" src="post.jsp" width="100%" scrolling="no"/>
</frameset>

You can download the application (which includes the source) here.

Note that the application described here is really just meant as an example. I would never recommend using static variables the way I did in it.
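For instance (this is my own sketch, not part of the downloadable sample), the static firstServlet field could instead be kept as a ServletContext attribute, which is scoped to the web application:

    import javax.servlet.ServletContext;

    // Hypothetical helper: stores the "first handler" id as a ServletContext
    // attribute instead of a static field. The attribute name is mine.
    final class ChatState {
        private static final String FIRST_HANDLER = "chat.firstHandlerId";

        static int getFirstHandler(ServletContext ctx) {
            Integer id = (Integer) ctx.getAttribute(FIRST_HANDLER);
            return (id == null) ? -1 : id.intValue();
        }

        static void setFirstHandler(ServletContext ctx, int handlerId) {
            ctx.setAttribute(FIRST_HANDLER, Integer.valueOf(handlerId));
        }
    }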


Before I forget, one interesting feature I’ve recently added (it was requested on my first blog about Grizzly’s Comet) is the ability to update a single CometHandler (or a single polled request). When calling cometContext.addCometHandler(..), the returned value can later be re-used to push data only to that CometHandler by doing:


     cometContext.notify(String message, int type, int cometHandlerID);

See the API for more info. For the Chat example, I’ve added a pop-up window where a chat moderator receives all the chat messages, along with who is connected and from where:


  event.getCometContext().notify("User " + clientIP
     + " is getting a new message.",CometEvent.NOTIFY,
       firstServlet);

That’s it. Very simple, isn’t it? No need to spawn a thread anywhere on the Servlet side, no special Servlet operations, etc.

Once I have a chance, I will try to use AJAX and improve the client. Any help is appreciated on that side :-) As usual, thanks for all the feedback sent by email!


Categories: GlassFish, Grizzly

Extending GlassFish’s WebContainer

The GlassFish WebContainer is based on the Tomcat Servlet implementation called Catalina. The Catalina architecture, introduced in Tomcat 4.x, was designed to allow developers to extend it easily. There are several interception points in Catalina that can be extended:

  • Valve: A Valve is a request processing component associated with a particular virtual-server (host) or web-module (servlet context). A series of Valves is generally associated into a Pipeline. Developers usually inject their Valves in order to get access to the Catalina internal objects and manipulate the request/response objects before filters or servlets are invoked. As an example, the access logging mechanism in GlassFish is implemented as a Valve (a minimal Valve sketch follows below).
  • ContainerListener: A listener for significant Container-generated events. As an example, every time a WAR is deployed, a ContainerListener implementation will be notified of every newly created Servlet, TagLib or Filter.
  • InstanceListener: A listener for significant events related to a specific Servlet instance. An implementation of this interface will get notified every time a Servlet is about to be invoked or when certain operations are performed on the Servlet (like when its ServletContextListeners are called).
  • LifecycleListener: A listener for significant events (including “virtual server start” and “web module stop”) generated by a component that implements the Lifecycle interface. An implementation of this interface will get notified by most internal objects when they are started or stopped (like when a new virtual-server is created). As an example, we use this interface internally to notify the other GlassFish containers.

A virtual server supports the injection of Valves, ContainerListeners and LifecycleListeners. A web module supports the same as the virtual server, with the addition of the InstanceListener.
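To give a concrete idea, here is a minimal Valve sketch. It assumes the stock Tomcat 5.x ValveBase signature and the class name is mine; the Request/Response packages and the exact invoke() signature may differ slightly in GlassFish’s Catalina fork, so check the Valve interface shipped in the GlassFish jars before compiling:

    import java.io.IOException;
    import javax.servlet.ServletException;
    import org.apache.catalina.connector.Request;
    import org.apache.catalina.connector.Response;
    import org.apache.catalina.valves.ValveBase;

    // Logs the remote address, then lets the rest of the pipeline
    // (other Valves, filters, servlets) run.
    public class RequestLoggingValve extends ValveBase {

        public void invoke(Request request, Response response)
                throws IOException, ServletException {
            System.out.println("Request from " + request.getRemoteAddr()
                    + " for " + request.getRequestURI());
            getNext().invoke(request, response);
        }
    }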

Once you have your extension implemented, you need to install it in GlassFish. To install an extension:

For virtual-server

  • Add your classes into a jar file (ex: glassfish-ext.jar)
  • Place your jar under ${glassfish.home}/lib
  • Edit ${glassfish.home}/domains/domain1/config/domain.xml, locate the virtual-server element, and add <property name="type" value="fully qualified class name"/>, where type can be valve_ or listener_, with a unique number allowing you to define more than one of each type.

    Ex: <property name="valve_1" value="org.apache.catalina.valves.RequestDumperValve"/>

For web-module

  • Same as for virtual-server, but under the web-module element of domain.xml. Except that you can bundle your implementation under WEB-INF/lib or WEB-INF/classes (easier to deploy to several GlassFish installations). A minimal listener sketch follows the example below.

    Ex: <property name="listener_1" value="org.glassfish.MyServletListener"/>
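And for the listener_ flavour, a LifecycleListener can be as small as this (a sketch assuming the standard Catalina LifecycleListener interface; the class name is mine):

    import org.apache.catalina.Lifecycle;
    import org.apache.catalina.LifecycleEvent;
    import org.apache.catalina.LifecycleListener;

    // Logs start and stop events of the component (virtual server or
    // web module) this listener is attached to.
    public class StartStopListener implements LifecycleListener {

        public void lifecycleEvent(LifecycleEvent event) {
            if (Lifecycle.START_EVENT.equals(event.getType())
                    || Lifecycle.STOP_EVENT.equals(event.getType())) {
                System.out.println("Lifecycle event '" + event.getType()
                        + "' from " + event.getLifecycle());
            }
        }
    }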

Very easy, isn’t it :-). This is supported starting with GlassFish v2 build 17.


Categories: GlassFish

Enabling CGI support in GlassFish

Want to execute CGI in GlassFish? The GlassFish CGI support is disabled by default, and it is based on the Tomcat implementation. There are two ways to enable it: only inside your web application, or for all web applications deployed in GlassFish. I strongly recommend enabling it inside your application only, for security reasons, but if you control the applications deployed in GlassFish, then enabling it for all web applications might be simpler. For your own web application, add the information below to your WEB-INF/web.xml. To enable it globally, add it to $glassfish.home/domains/domain1/config/default-web.xml.

First, define the CGI Servlet:

  <!-- Common Gateway Includes (CGI) processing servlet, which supports     -->
  <!-- execution of external applications that conform to the CGI spec      -->
  <!-- requirements.  Typically, this servlet is mapped to the URL pattern  -->
  <!-- "/cgi-bin/*", which means that any CGI applications that are         -->
  <!-- executed must be present within the web application.  This servlet   -->
  <!-- supports the following initialization parameters (default values     -->
  <!-- are in square brackets):                                             -->
  <!--                                                                      -->
  <!--   cgiPathPrefix        The CGI search path will start at             -->
  <!--                        webAppRootDir + File.separator + this prefix. -->
  <!--                        [WEB-INF/cgi]                                 -->
  <!--                                                                      -->
  <!--   debug                Debugging detail level for messages logged    -->
  <!--                        by this servlet.  [0]                         -->
  <!--                                                                      -->
  <!--   executable           Name of the exectuable used to run the        -->
  <!--                        script. [perl]                                -->
  <!--                                                                      -->
  <!--   parameterEncoding    Name of parameter encoding to be used with    -->
  <!--                        CGI servlet.                                  -->
  <!--                        [System.getProperty("file.encoding","UTF-8")] -->
  <!--                                                                      -->
  <!--   passShellEnvironment Should the shell environment variables (if    -->
  <!--                        any) be passed to the CGI script? [false]     -->
  <!--                                                                      -->
 
    <servlet>
        <servlet-name>cgi</servlet-name>
        <servlet-class>org.apache.catalina.servlets.CGIServlet</servlet-class>
        <init-param>
          <param-name>debug</param-name>
          <param-value>0</param-value>
        </init-param>
        <init-param>
          <param-name>cgiPathPrefix</param-name>
          <param-value>WEB-INF/cgi</param-value>
        </init-param>
         <load-on-startup>5</load-on-startup>
    </servlet>

Then define the servlet mapping:

<servlet-mapping>
   <servlet-name>cgi</servlet-name>
   <url-pattern>/cgi-bin/*</url-pattern>
</servlet-mapping>

If you added this info in default-web.xml, you will need to restart GlassFish. Then, in your web application, create a folder named

WEB-INF/cgi/

and drop your CGI scripts under that folder, then deploy your application. You will be able to execute your CGI by going to:


http://host:port/your-app/cgi-bin/your_script

Voila!

Categories: GlassFish

Enabling WebDav in GlassFish

It is very easy to enable WebDav in GlassFish, but you can’t enable it the way you usually do with Tomcat. The reason is that when you define the WebDav servlet in default-web.xml, you can’t protect it, because GlassFish lacks support for a default-sun-web.xml, where you would usually define your security constraint mappings. You certainly don’t want to enable WebDav without protecting the functionality with a security constraint, because if it is not protected, everybody will be able to make modifications to your content.

First, enable the WebDav Servlet in your web.xml:


<servlet>
    <servlet-name>webdav</servlet-name>
    <servlet-class>org.apache.catalina.servlets.WebdavServlet</servlet-class>
    <init-param>
        <param-name>debug</param-name>
        <param-value>0</param-value>
    </init-param>
    <init-param>
        <param-name>listings</param-name>
        <param-value>true</param-value>
    </init-param>
    <init-param>
        <param-name>readonly</param-name>
        <param-value>false</param-value>
    </init-param>
</servlet>

Then define the servlet mapping associated with your WebDav servlet:


<servlet-mapping>
<servlet-name>webdav</servlet-name>
<url-pattern>/webdav/*</url-pattern>
</servlet-mapping>

You are now ready to use any WebDav client by connecting to:

http://host:port/<context-root>/webdav/file
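
If you don’t have a WebDav client handy, a plain HTTP PUT is enough for a quick smoke test. This is just a sketch; the host, context root and file name below are hypothetical:

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Quick WebDav smoke test: PUT a small text file on the server.
    // Adjust host, port and context root to your deployment.
    public class WebDavPut {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://localhost:8080/mywebapp/webdav/hello.txt");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("PUT");
            conn.setDoOutput(true);
            OutputStream out = conn.getOutputStream();
            out.write("Hello from WebDav".getBytes("UTF-8"));
            out.close();
            System.out.println("Server answered: " + conn.getResponseCode());
        }
    }

Once the security constraint described below is in place, the same request will of course need BASIC credentials.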

As an example, I did it using Office 2003:


File > Open > http://192.168.0.101:8080/glassfish-webday/webdav/index.html

and changed the index.html page. The next step is to protect the WebDav support, because right now everybody can make modifications to your content:


<security-constraint>
    <web-resource-collection>
        <web-resource-name>Login Resources</web-resource-name>
        <url-pattern>/webdav/*</url-pattern>
    </web-resource-collection>
    <auth-constraint>
        <role-name>Admin</role-name>
    </auth-constraint>
    <user-data-constraint>
        <transport-guarantee>NONE</transport-guarantee>
    </user-data-constraint>
</security-constraint>
<login-config>
    <auth-method>BASIC</auth-method>
    <realm-name>default</realm-name>
</login-config>
<security-role>
    <role-name>Admin</role-name>
</security-role>

and then define, in your sun-web.xml:


<security-role-mapping>
<role-name>Admin</role-name>
<group-name>Admin</group-name>
</security-role-mapping>

Next, create your user and password:


asadmin create-file-user --user admin --host localhost --port 4848 --terse=true --groups Admin --authrealmname default admin

You are now ready to use WebDav with GlassFish.

Categories: GlassFish

Can a Grizzly run faster than a Coyote?

Un coyote court-il plus vite qu’un grizzly? (Does a coyote run faster than a grizzly?) Or, can an NIO-based HTTP connector be as fast as a traditional blocking-IO HTTP connector or a C HTTP connector? The next couple of lines will compare Tomcat 5.5.16, with both the Coyote HTTP11 connector and the Tomcat Native (version 1.1.2) connector (aka APR), against the GlassFish Grizzly NIO-powered HTTP connector. Grizzly is an NIO extension of the HTTP11 implementation.

But don’t be fooled by the numbers I’m gonna publish. My goal here is to dispel the myth that NIO non-blocking sockets cannot be used along with the HTTP protocol. This blog is not against Tomcat, and I’m still part of the Tomcat community (although I’m not helping a lot these days, I’m very interested in Costin’s work on NIO). OK, enough rant…..

First, if my numbers don’t match your real-life application, I will be interested to hear about it. If you like the APR/OpenSSL functionality and think it should be included in GlassFish, I will be more than happy to port it to GlassFish. But let’s wait for the numbers before saying yes :-)

Oh… BTW, some Grizzly numbers have already been published as part of the SJSAS PE 8.2 specJ2004 results. All results can be found here.

Passons maintenant aux choses serieuses…. (Now, on to the serious stuff….)

Differences between Tomcat and GlassFish
First, in order to compare the two, let’s explore the differences between the products. Since GlassFish is a J2EE container, the bundled WebContainer has to support more extensions than Tomcat. Those extensions mainly consist of supporting EJB and the Java(TM) Authorization Contract for Containers. Both extensions have an impact on performance because, internally, they mean extra event notifications need to happen (in Catalina terms, LifecycleEvents).

A perfect integration would mean no performance regression when extensions are added, but unfortunately having to support EJB adds a small performance hit. Fortunately, the Java(TM) Authorization Contract for Containers doesn’t impact performance. Hence the best comparison would have been JBoss against GlassFish, or Tomcat with Grizzly in front of it. But I’m too lazy to install JBoss….


Out-of-the-box differences

The main differences are:
+ GlassFish has Single Sign-On enabled, Tomcat doesn’t
+ GlassFish has Access Logging enabled, Tomcat doesn’t
+ GlassFish starts using java -client, Tomcat doesn’t set any VM flag.

Let’s turn off those differences in GlassFish by adding, in domain.xml:


    <property name="accessLoggingEnabled" value="false" />
    <property name="sso-enabled" value="false" />
</http-service>

and start both products using java -server. For all the benchmarks I’m using Mustang:


Java(TM) SE Runtime Environment (build 1.6.0-beta2-b75)
Java HotSpot(TM) Server VM (build 1.6.0-beta2-b75, mixed mode)

Also, Grizzly has a different cache mechanism based on NIO, whereas Tomcat caches static files in memory using its naming implementation. The Grizzly cache is configured using:


<http-file-cache file-caching-enabled="true" file-transmission-enabled="false" globally-enabled="true" hash-init-size="0" max-age-in-seconds="600" max-files-count="1024" medium-file-size-limit-in-bytes="9537600" medium-file-space-in-bytes="90485760" small-file-size-limit-in-bytes="1048" small-file-space-in-bytes="1048576"/>

And I’ve turned off the onDemand mechanism:

-Dcom.sun.enterprise.server.ss.ASQuickStartup=false

to fully start GlassFish.


Envoyons de l’avant nos gens, envoyons de l’avant! (Push forward, folks, push forward!)

ApacheBench (ab)
Let’s start with a well-known stress tool called ab (google ApacheBench if you don’t know it). I will use ab to compare the performance of:

+ small, medium, and very large gif files (large & medium give similar results)
+ basic Servlet
+ basic JSP (the default index.jsp page from Tomcat)

I’m gonna compare Tomcat HTTP11, Tomcat APR and GlassFish. I have tried to come up with the best possible configuration for Tomcat. I got the best numbers with:


<Connector useSendFile="true" sendfileSize="500" port="8080" maxHttpHeaderSize="8192" pollerSize="500" maxThreads="500" minSpareThreads="25" maxSpareThreads="250" enableLookups="false" redirectPort="8443" acceptCount="5000" connectionTimeout="20000" disableUploadTimeout="true" />

I’m putting more threads than the expected number of simultaneous connections (300) here so that HTTP11 can be tested properly (since HTTP11 uses one thread per connection). I’ve also tried adding:

firstReadTimeout="0" pollTime="0"

but the result wasn’t as good as with the default.

For Grizzly, I’ve used:


<http-listener id="http-listener-1" ... acceptor-thread="2" .../>
<request-processing header-buffer-length-in-bytes="4096" initial-thread-count="5" request-timeout-in-seconds="20" thread-count="10" thread-increment="10"/>

Yes, you read it correctly: Tomcat needs 500 threads where Grizzly needs only 10. NIO non-blocking is fascinating, isn’t it?
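If you are wondering how 10 threads can keep 300 connections busy, here is the classic non-blocking pattern in a nutshell. This is a generic java.nio sketch, not Grizzly’s actual code: a single selector thread watches all the sockets and only does work when bytes are ready, so no thread is parked per connection.

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;
    import java.util.Iterator;

    // Generic NIO accept/read loop, for illustration only.
    public class TinySelectorLoop {
        public static void main(String[] args) throws IOException {
            Selector selector = Selector.open();
            ServerSocketChannel server = ServerSocketChannel.open();
            server.socket().bind(new InetSocketAddress(8080));
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            while (true) {
                selector.select();
                Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                while (keys.hasNext()) {
                    SelectionKey key = keys.next();
                    keys.remove();
                    if (key.isAcceptable()) {
                        SocketChannel client = server.accept();
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {
                        // A real server would hand the ready key to a small
                        // worker pool; here we just drain what is available.
                        SocketChannel client = (SocketChannel) key.channel();
                        ByteBuffer buffer = ByteBuffer.allocate(8192);
                        if (client.read(buffer) == -1) {
                            client.close();
                        }
                    }
                }
            }
        }
    }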

The ab command I’ve used is:

% ab -q -n1000 -c300 -k http://perf-v4.sfbay.sun.com:8080/XXX

Ready to see numbers? Not yet; here is the machine setup:

server

OS : Red Hat Enterprise Linux AS release 4 (Nahant Update 2)
ARCH : x86
Type : x86
CPU : 2x3.2GHz
Memory : 4 GB

client

OS : RedHat Enterprise Linux AS 4.0
ARCH : x86
Type : x86
CPU : 2x1.4GHz
Memory : 4GB

OK, now the numbers. For each test, I ran the ab command 50 times and took the mean (for the large static resource, I ran it between 10 and 13 times because it takes a while). The numbers don’t change if I remove the ramp-up time for every Connector, so I decided not to remove it.

Small static file (2k)

% ab -q -n1000 -c300 -k http://perf-v4.sfbay.sun.com:8080/tomcat.gif

Grizzly: 4104.32 APR: 4377.2 HTTP11: 4448.08
Here HTTP11 is doing a very good job, while Grizzly seems to be struggling to service the requests. Why? My next blog will explain a problem I’m seeing with MappedByteBuffer, small files and FileChannel.transferTo.
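For readers who haven’t played with those APIs, here is a generic java.nio sketch (nothing Grizzly-specific) of the two ways a static file can be pushed to a socket; the trade-off between them is what that next blog will look at:

    import java.io.IOException;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.channels.SocketChannel;

    // Generic sketch: two ways to send a static file over a socket.
    public class StaticFileSender {

        // (1) Map the file in memory, then write the buffer to the socket.
        public static void sendMapped(FileChannel file, SocketChannel socket)
                throws IOException {
            MappedByteBuffer buffer =
                    file.map(FileChannel.MapMode.READ_ONLY, 0, file.size());
            while (buffer.hasRemaining()) {
                socket.write(buffer);
            }
        }

        // (2) Let the kernel do the copy (sendfile) via transferTo().
        public static void sendWithTransferTo(FileChannel file, SocketChannel socket)
                throws IOException {
            long position = 0;
            long size = file.size();
            while (position < size) {
                position += file.transferTo(position, size - position, socket);
            }
        }
    }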

Medium static file (14k)

% ab -q -n1000 -c300 -k http://perf-v4.sfbay.sun.com:8080/grizzly2.gif

Grizzly: 746.27 APR: 749.63 HTTP11: 745.65
OK here Grizzly is better (thanks MappedByteBuffer).

Very large static file (954k)

% ab -q -n1000 -c300 -k http://perf-v4.sfbay.sun.com:8080/images.jar

Grizzly: 11.88 APR: 10.5 HTTP11: 10.6
Hum… here APR has connection errors (mean 10), as does HTTP11 (mean 514), and keep-alive wasn’t honored for all connections. Grizzly is fine on that run.

Simple Servlet

% ab -q -n1000 -c300 -k http://perf-v4.sfbay.sun.com:8080/tomcat-test/ServletTest

Grizzly: 10929.93 APR: 10600.71 HTTP11: 10764.67
Interesting numbers… but I can’t say if it’s the Connector or Catalina (the Servlet container). GlassFish’s Catalina is based on Tomcat 5.0.x (plus several performance improvements).

Simple JSP

% ab -q -n1000 -c300 -k http://perf-v4.sfbay.sun.com:8080/index.jsp

Grizzly: 1210.49 APR: 1201.09 HTTP11: 1191.57
The result here is amazing because Tomcat supports JSP 2.0, whereas GlassFish supports JSP 2.1. Kin-Man, Jacob and Jan (to name a few) don’t seem to have introduced any performance regressions :-).

One thing I don’t like with ab is that it doesn’t measure the outliers, meaning some requests might take 0.5 seconds, some 5 seconds. All requests are counted, no matter how long they took to be serviced. I would prefer a Connector that avoids outliers (or at least I don’t want to be the outlier when I log on to my bank account!). Let’s see what the second benchmark can tell:

Real world benchmark
The purpose of my second benchmark is to stress the server with a real-world application that contains complex Servlet, JSP and database transactions. I think ab is a good indicator of performance, but it focuses more on throughput than scalability. The next benchmark simulates an e-commerce site. Customers can browse through a large inventory of items, put those items into shopping carts, purchase them, open new accounts, get account status: all the basic things you’d expect. There is also an admin interface for updating prices, adding inventory, and so on. Each customer executes a typical scenario at random; each hit on the website is counted as an operation.

The benchmark measures the maximum number of users that the website can handle, assuming that 90% of the responses must come back within 2 seconds and that the average think time of the users is 8 seconds. The results are:

Grizzly: 2850 APR: 2110 HTTP11: 1610

Note: I tried APR with 10 threads and with 1000 threads; the best results were obtained using 500 threads.

Conclusion
Draw your own conclusions ;-) My goal here was to demonstrate that NIO (non-blocking sockets) HTTP Connectors are ready for prime time. I hope the myth is over….

Once I have a chance, I would like to compare Grizzly with Jetty (since Jetty has an NIO implementation). I just need to find someone who knows Jetty and can help me configure it properly :-)

Finally, during this exercise I found a bug with the 2.4 kernel, where performance was really bad. Also, it is quite possible you have run benchmarks where Tomcat performs better. I would be interested to learn how Tomcat was configured…..




Categories: GlassFish, Grizzly

Running GlassFish with Apache httpd

Quick blog to update the way you can front GlassFish with Apache httpd. Starting with build 41, all Jakarta Commons classes have been renamed from org.apache.* to com.sun.org.apache.*. Since mod_jk uses some of those classes, you need to add a couple of Jakarta Commons packages to make it work. Let’s do it step by step:

(1) First, install mod_jk

The next step is to configure httpd.conf and worker.properties. For example, add in /etc/httpd/conf/httpd.conf:


LoadModule jk_module /usr/lib/httpd/modules/mod_jk.so
JkWorkersFile /etc/httpd/conf/worker.properties
# Where to put jk logs
JkLogFile /var/log/httpd/mod_jk.log
# Set the jk log level [debug/error/info]
JkLogLevel debug
# Select the log format
JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "
# JkOptions indicate to send SSL KEY SIZE,
JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories
# JkRequestLogFormat set the request format
JkRequestLogFormat "%w %V %T"
# Send all jsp requests to GlassFish
JkMount /*.jsp worker1
# Send all glassfish-test requests to GlassFish
JkMount /glassfish-test/* worker1

Then add, in your /etc/httpd/conf/worker.properties:


# Define 1 real worker using ajp13
worker.list=worker1
# Set properties for worker1 (ajp13)
worker.worker1.type=ajp13
worker.worker1.host=localhost.localdomain
worker.worker1.port=8009
worker.worker1.lbfactor=50
worker.worker1.cachesize=10
worker.worker1.cache_timeout=600
worker.worker1.socket_keepalive=1
worker.worker1.socket_timeout=300

Start httpd.

(2) Copy, from a fresh Tomcat 5.5.16 installation


cp $CATALINA_HOME/server/lib/tomcat-ajp.jar $GLASSFISH_HOME/lib/.

Next, copy commons-logging.jar and commons-modeler.jar from the Jakarta Commons site to $GLASSFISH_HOME/lib/.

(3) Then enable mod_jk by adding:


$GLASSFISH_HOME/bin/asadmin create-jvm-options -Dcom.sun.enterprise.web.connector.enableJK=8009

Then start GlassFish. That’s it!

Categories: GlassFish