
Using Mustang’s jmap/jhat to profile GlassFish

Recently I was working on a bug where Grizzly was leaking memory. From the code, I was convinced the leak wasn’t in Grizzly (of course :-))! The problem occurred when GlassFish was stressed over three days. After approximately 16 hours, some components in GlassFish were no longer able to get a file descriptor, so I suspected a memory leak somewhere in someone else’s code ;-). The test was running on JDK 1.5ur7, and it was very hard to find what caused the problem:

ERROR /export1/as90pe/domains/domain1/imq/instances/imqbroker/etc/accesscontrol.properties (Too many open files)

Uh... OK, I admit I wasn’t able to debug this using strace, pfiles, etc., and was ready to blame the IMQ team :-). Then I decided to switch to Mustang to see if I could get a better error message. Thanks to Mustang, I got:

Exception in thread "main" java.lang.OutOfMemoryError: Java heap space

I decided to ping Alan to make sure he wasn’t aware of any socket leak, and he recommended I try the improved jhat. I was a little scared of using that tool and wasn’t sure whether I should use strace/truss instead... but I was very, very surprised by the new Mustang jhat. Since then, I can’t live without it :-). What I did was:

% jmap -dump:file=heap.bin <pid>
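
If you can run a small piece of code inside the JVM you want to inspect (or reach it over JMX), the same kind of binary heap dump can also be triggered programmatically through the HotSpotDiagnosticMXBean that Mustang exposes. Here is a minimal sketch of that alternative; the class name is arbitrary and the output file name just matches the jmap example:

import java.lang.management.ManagementFactory;

import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {
    public static void main(String[] args) throws Exception {
        // Proxy for the HotSpot-specific diagnostic MXBean registered by the JVM.
        HotSpotDiagnosticMXBean diag = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);

        // Write a binary dump that jhat can read; 'true' means live objects only.
        diag.dumpHeap("heap.bin", true);
    }
}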

And then

% jhat -J-mx512m heap.bin

This started a web server, so I pointed my browser at:

http://localhost:7000

Wow! From that page, I was able to browse the heap and find out which objects had been created and by whom. The histo page is a very good starting point:

http://localhost:7000/histo/

because it tells you the number of objects allocated at the time jmap was executed. From there, you can click through and see which object holds a reference to what. The instance count page is also helpful:

http://localhost:7000/showInstanceCounts/includePlatform/

because it points you to the candidates for the memory leak. Finally, the OQL query page is amazing:

http://localhost:7000/oql/

I wanted to see all the currently active HTTP requests, so I ran:

select s from com.sun.enterprise.web.connector.grizzly.ReadTask s where s.byteBuffer.position > 0

and got them! I don’t think you can do this with any profiler available right now.

For Grizzly, the number of java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask instances was extremely high, because cancelled Futures aren’t garbage collected unless you explicitly purge() them, which is something I didn’t expect (and I’m sure I’m not the only one). So I decided not to use java.util.concurrent.ScheduledThreadPoolExecutor at all and instead implemented a better strategy for keeping connections alive... and now the leak has disappeared (not to mention performance is much better!).
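
To make the purge() point concrete, here is a minimal, standalone sketch (demo code only, not Grizzly’s): the executor’s internal queue keeps a reference to every ScheduledFutureTask, even after cancellation, until purge() removes it.

import java.util.concurrent.Future;
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PurgeDemo {
    public static void main(String[] args) {
        ScheduledThreadPoolExecutor executor = new ScheduledThreadPoolExecutor(1);

        // Schedule many keep-alive style timeouts far in the future, then cancel
        // them right away, the way a connection would when it closes early.
        for (int i = 0; i < 100000; i++) {
            Future<?> timeout = executor.schedule(new Runnable() {
                public void run() {
                    System.out.println("keep-alive expired");
                }
            }, 1, TimeUnit.HOURS);
            timeout.cancel(false);
        }

        // The cancelled ScheduledFutureTasks are still held by the executor's
        // queue, so they cannot be garbage collected yet.
        System.out.println("queued before purge: " + executor.getQueue().size());

        // purge() drops the cancelled tasks from the queue, freeing them for GC.
        executor.purge();
        System.out.println("queued after purge:  " + executor.getQueue().size());

        executor.shutdown();
    }
}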

Of course, profilers like the NetBeans Profiler can always be used... but I find jmap/jhat so fast and simple to use, not to mention that I really like the Object Query Language (OQL) page, where I can get at the exact instance I’m looking for... good work, J2SE team!!
