
Can a Grizzly run faster than a Coyote?

Does a coyote run faster than a grizzly? Or can an NIO-based HTTP connector be as fast as a traditional IO HTTP connector or a C HTTP connector? The next couple of paragraphs will compare Tomcat 5.5.16, using both the Coyote HTTP11 connector and the Tomcat Native (version 1.1.2) connector (aka APR), with the GlassFish Grizzly NIO-powered HTTP connector. Grizzly is an NIO extension of the HTTP11 implementation.

But don’t be fooled by the numbers I’m gonna publish. My goal here is to dispel the myth that NIO non-blocking sockets cannot be used along with the HTTP protocol. This blog is not against Tomcat, and I’m still part of the Tomcat community (although I’m not helping a lot these days, I’m very interested in Costin’s work on NIO). OK, enough ranting…..

First, if my numbers don’t match your real-life application, I will be interested to hear about it. If you like the APR/OpenSSL functionality and think it should be included in GlassFish, I will be more than happy to port it to GlassFish. But let’s wait for the numbers before saying yes🙂

Oh…BTW, some Grizzly numbers have already been published as part of the SJSAS PE 8.2 specJ2004 results. All results can be found here.

Now, on to the serious stuff….

Differences between Tomcat and GlassFish
First, in order to compare the two, let’s explore the differences between the products. Since GlassFish is a J2EE container, the bundled web container has to support more extensions than Tomcat. Those extensions mainly consist of supporting EJB and the JavaTM Authorization Contract for Containers. Both extensions have an impact on performance because, internally, they mean extra event notifications need to happen (in Catalina, that means LifecycleEvent).

A perfect integration would mean no performance regression when extensions are added, but unfortunately having to support EJB adds a small performance hit. Fortunately, the JavaTM Authorization Contract for Containers doesn’t impact performance. Hence the best comparison would have been to compare JBoss and GlassFish, or Tomcat with Grizzly in front of it. But I’m too lazy to install JBoss….


Out-of-the-box difference

The main differences are:
+ GlassFish has Single Sign-On enabled, Tomcat doesn’t
+ GlassFish has access logging enabled, Tomcat doesn’t
+ GlassFish starts using java -client, Tomcat doesn’t set any JVM mode.

Let’s turn off those differences in GlassFish by adding, in domain.xml:


<property name="accessLoggingEnabled" value="false" />
<property name="sso-enabled" value="false" />
</http-service>

and start both products using java -server. For all the benchmarks I’m using Mustang:


Java(TM) SE Runtime Environment (build 1.6.0-beta2-b75)
Java HotSpot(TM) Server VM (build 1.6.0-beta2-b75, mixed mode)

Also, Grizzly has a different cache mechanism based on NIO, whereas Tomcat caches static files in memory using its naming implementation. The Grizzly cache is configured using:


<http-file-cache file-caching-enabled="true" file-transmission-enabled="false" globally-enabled="true" hash-init-size="0" max-age-in-seconds="600" max-files-count="1024" medium-file-size-limit-in-bytes="9537600" medium-file-space-in-bytes="90485760" small-file-size-limit-in-bytes="1048" small-file-space-in-bytes="1048576"/>

And I’ve turned off the onDemand mechanism:

-Dcom.sun.enterprise.server.ss.ASQuickStartup=false

to fully start GlassFish.


Let’s push forward, folks, let’s push forward!

ApacheBench (ab)
Let’s start with a well-known stress tool called ab (Google ApacheBench if you don’t know it). I will use ab to compare the performance of:

+ small, medium, and very large gif files (large & medium give similar results)
+ basic Servlet
+ basic JSP (the default index.jsp page from Tomcat)

I’m gonna compare Tomcat HTTP11, Tomcat APR and GlassFish. I have tried to come up with the best configuration possible for Tomcat. I got the best numbers with:


<Connector useSendFile="true" sendfileSize="500" port="8080" maxHttpHeaderSize="8192" pollerSize="500" maxThreads="500" minSpareThreads="25" maxSpareThreads="250" enableLookups="false" redirectPort="8443" acceptCount="5000" connectionTimeout="20000" disableUploadTimeout="true" />

I’m allocating more threads than the expected number of simultaneous connections (300) here so HTTP11 can be tested properly (since HTTP11 uses one thread per connection). I’ve also tried adding:

firstReadTimeout="0" pollTime="0"

but the result wasn’t as good as with the default.

For Grizzly, I’ve used:


<http-listener id="http-listener-1" ... acceptor-thread="2" .../>
<request-processing header-buffer-length-in-bytes="4096" initial-thread-count="5" request-timeout-in-seconds="20" thread-count="10" thread-increment="10"/>

Yes, you read that correctly. Tomcat needs 500 threads where Grizzly needs only 10. Non-blocking NIO is fascinating, isn’t it?
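To see why so few threads suffice, here is a minimal sketch (my own illustration, not Grizzly’s actual code) of the NIO pattern involved: a single Selector thread multiplexes accept and read events over many connections, so worker threads are only needed while a request is actually being processed, not for the whole lifetime of the connection.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SelectorSketch {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();

        // Non-blocking server socket registered for accept events.
        ServerSocketChannel server = ServerSocketChannel.open();
        server.configureBlocking(false);
        server.socket().bind(new InetSocketAddress("127.0.0.1", 0));
        server.register(selector, SelectionKey.OP_ACCEPT);
        int port = server.socket().getLocalPort();

        // A plain blocking client, just to generate one connection and one request.
        SocketChannel client = SocketChannel.open(
                new InetSocketAddress("127.0.0.1", port));
        client.write(ByteBuffer.wrap("GET / HTTP/1.0\r\n\r\n".getBytes()));

        // The event loop: ONE thread services every channel that is ready.
        int accepted = 0, reads = 0;
        long deadline = System.currentTimeMillis() + 2000;
        while (System.currentTimeMillis() < deadline && reads == 0) {
            if (selector.select(100) == 0) continue;
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    // New connection: register it for reads, no thread dedicated to it.
                    SocketChannel ch = server.accept();
                    ch.configureBlocking(false);
                    ch.register(selector, SelectionKey.OP_READ);
                    accepted++;
                } else if (key.isReadable()) {
                    // Bytes are available: only now would work be handed to a worker.
                    ByteBuffer buf = ByteBuffer.allocate(8192);
                    ((SocketChannel) key.channel()).read(buf);
                    reads++;
                }
            }
        }
        System.out.println("accepted=" + accepted + " reads=" + reads);
        client.close();
        server.close();
        selector.close();
    }
}
```

With thread-per-connection HTTP11, each of the 300 concurrent keep-alive connections pins a thread even while idle; with a selector loop like this one, idle connections cost nothing but a registration, which is why a pool of 10 threads can keep up.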

The ab command I’ve used is:

% ab -q -n1000 -c300 -k http://perf-v4.sfbay.sun.com:8080/XXX

Ready to see numbers? Not yet, here is the machine setup:

server

OS : Red Hat Enterprise Linux AS release 4 (Nahant Update 2)
ARCH : x86
Type : x86
CPU : 2x3.2GHz
Memory : 4 GB

client

OS : RedHat Enterprise Linux AS 4.0
ARCH : x86
Type : x86
CPU : 2x1.4GHz
Memory : 4GB

OK, now the numbers. For each test, I’ve run the ab command 50 times and took the mean (for the large static resource, I ran it between 10 and 13 times because it takes time). The numbers don’t change if I remove the ramp-up time for each connector, so I decided not to remove it.

Small static file (2k)

% ab -q -n1000 -c300 -k http://perf-v4.sfbay.sun.com:8080/tomcat.gif

Grizzly: 4104.32 APR: 4377.2 HTTP11: 4448.08
Here HTTP11 is doing a very good job, whereas Grizzly seems to be struggling while servicing requests. Why? My next blog will explain a problem I’m seeing with MappedByteBuffer, small files and FileChannel.transferTo.
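For context, here is a small sketch (my own illustration, with a made-up 2k file) of the two ways a connector can push a static file to a client: writing from a cached MappedByteBuffer, versus FileChannel.transferTo, which can use the kernel’s sendfile under the covers. Both paths deliver the same bytes; which one wins depends on file size, which is what the numbers above hint at.

```java
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;

public class SendFileSketch {
    public static void main(String[] args) throws IOException {
        // Create a small "static resource" to serve (2k, like tomcat.gif above).
        File f = File.createTempFile("static", ".gif");
        f.deleteOnExit();
        byte[] data = new byte[2048];
        for (int i = 0; i < data.length; i++) data[i] = (byte) i;
        FileOutputStream out = new FileOutputStream(f);
        out.write(data);
        out.close();

        FileChannel fc = new FileInputStream(f).getChannel();

        // Path 1: cache the file as a MappedByteBuffer and write it to the client.
        MappedByteBuffer mapped =
                fc.map(FileChannel.MapMode.READ_ONLY, 0, fc.size());
        ByteArrayOutputStream sink1 = new ByteArrayOutputStream();
        WritableByteChannel ch1 = Channels.newChannel(sink1);
        while (mapped.hasRemaining()) ch1.write(mapped);

        // Path 2: transferTo, which may avoid copying through user space entirely.
        ByteArrayOutputStream sink2 = new ByteArrayOutputStream();
        WritableByteChannel ch2 = Channels.newChannel(sink2);
        long pos = 0, size = fc.size();
        while (pos < size) pos += fc.transferTo(pos, size - pos, ch2);

        System.out.println("mapped=" + sink1.size()
                + " transferTo=" + sink2.size());
        fc.close();
    }
}
```

The sinks here are in-memory channels so the sketch is self-contained; in the real connector they would be the client’s SocketChannel, where the cost difference between the two paths actually shows up.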

Medium static file (14k)

% ab -q -n1000 -c300 -k http://perf-v4.sfbay.sun.com:8080/grizzly2.gif

Grizzly: 746.27 APR: 749.63 HTTP11: 745.65
OK here Grizzly is better (thanks MappedByteBuffer).

Very large static file (954k)

% ab -q -n1000 -c300 -k http://perf-v4.sfbay.sun.com:8080/images.jar

Grizzly: 11.88 APR: 10.5 HTTP11: 10.6
Hum… here APR has connection errors (mean 10), as does HTTP11 (mean 514), and keep-alive wasn’t honored for all connections. Grizzly is fine on that run.

Simple Servlet

% ab -q -n1000 -c300 -k http://perf-v4.sfbay.sun.com:8080/tomcat-test/ServletTest

Grizzly: 10929.93 APR: 10600.71 HTTP11: 10764.67
Interesting numbers…but I can’t say if it’s the connector or Catalina (the Servlet container). GlassFish’s Catalina is based on Tomcat 5.0.x (plus several performance improvements).

Simple JSP

% ab -q -n1000 -c300 -k http://perf-v4.sfbay.sun.com:8080/index.jsp

Grizzly: 1210.49 APR: 1201.09 HTTP11: 1191.57
The result here is amazing because Tomcat supports JSP 2.0, whereas GlassFish supports JSP 2.1. Kin-Man, Jacob and Jan (to name a few) don’t seem to have introduced performance regressions🙂.

One thing I don’t like about ab is that it doesn’t measure outliers: some requests might take 0.5 seconds, some 5 seconds, yet all requests are counted the same, whatever they took to be serviced. I would prefer a connector that avoids outliers (or at least I don’t want to be the outlier when I log on to my bank account!). Let’s see what the second benchmark can tell us:
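To illustrate what a mean hides, here is a small sketch (with made-up service times) computing nearest-rank percentiles. A p90 or p99 figure exposes the slow requests that a raw mean, like the one ab reports, averages away.

```java
import java.util.Arrays;

public class Percentiles {
    // Nearest-rank percentile: value at rank ceil(p/100 * n), 1-based.
    static double percentile(double[] sorted, double p) {
        int rank = (int) Math.ceil(p / 100.0 * sorted.length);
        return sorted[Math.max(0, rank - 1)];
    }

    public static void main(String[] args) {
        // Hypothetical per-request service times in seconds: mostly fast,
        // with two outliers that dominate the user experience.
        double[] times = {0.4, 0.5, 0.5, 0.5, 0.6, 0.6, 0.7, 0.7, 5.0, 6.0};
        Arrays.sort(times);

        double mean = 0;
        for (double t : times) mean += t;
        mean /= times.length;

        // The mean (~1.5s) looks tolerable; p90 reveals the 5-second outlier.
        System.out.println("mean=" + mean
                + " p50=" + percentile(times, 50)
                + " p90=" + percentile(times, 90));
    }
}
```

This is exactly the idea behind the second benchmark’s "90% of responses within 2 seconds" criterion: it scores the connector on its worst-served users, not its average.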

Real world benchmark
The purpose of my second benchmark is to stress the server with a real-world application that contains complex Servlets, JSPs and database transactions. I think ab is a good indicator of performance but focuses more on throughput than scalability. The next benchmark simulates an e-commerce site. Customers can browse through a large inventory of items, put those items into shopping carts, purchase them, open new accounts, get account status: all the basic things you’d expect. There is also an admin interface for updating prices, adding inventory, and so on. Each customer executes a typical scenario at random; each hit on the website is counted as an operation.

The benchmark measures the maximum number of users that the website can handle, assuming that 90% of the responses must come back within 2 seconds and that the average think time of the users is 8 seconds. The results are:

Grizzly: 2850 APR: 2110 HTTP11: 1610

Note: I have tried APR with 10 threads and 1000 threads, and the best result was obtained using 500 threads.

Conclusion
Draw your own conclusions😉 My goal here was to demonstrate that NIO (non-blocking sockets) HTTP connectors are ready for prime time. I hope the myth is over….

Once I have a chance, I would like to compare Grizzly with Jetty (since Jetty has an NIO implementation). I just need to find someone who knows Jetty and can help me configure it properly🙂

Finally, during this exercise I found a bug with the 2.4 kernel where performance was really bad. Also, it is quite possible you’ve run benchmarks where Tomcat performs better. I would be interested to learn how Tomcat was configured…..

Categories: GlassFish, Grizzly