CompressedOops and 32 GB Heaps

April 7, 2012

At work we have a Java-based service that caches a very large amount of data. I spend a lot of time optimizing performance and memory usage for this service. The amount of memory it uses at runtime to cache enough data for good performance is now approaching the 32 GB boundary.

One downside of using a 64-bit JVM is that the object pointers the JVM uses to reference objects would normally need to grow from 4 bytes (32 bits) to 8 bytes (64 bits). But if the heap is under 32 GB, the JVM can take a shortcut: because objects are aligned on 8-byte boundaries, a 32-bit value shifted left by 3 bits is enough to address the entire heap (2^35 bytes = 32 GB).
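
To make that arithmetic concrete, here is a minimal Java sketch of the decoding step. The names and structure are mine for illustration only; this is not HotSpot's actual code:

```java
// Illustrative sketch of compressed-oop decoding, assuming the default
// 8-byte object alignment. Not HotSpot internals.
public class CompressedOopDemo {
    static final int SHIFT = 3; // log2 of the 8-byte object alignment

    // A 32-bit compressed oop, shifted left 3 bits and added to the heap
    // base, can address 2^35 bytes = 32 GB of heap.
    static long decode(int compressedOop, long heapBase) {
        return heapBase + ((compressedOop & 0xFFFFFFFFL) << SHIFT);
    }

    public static void main(String[] args) {
        long maxAddressable = 1L << (32 + SHIFT);
        System.out.println((maxAddressable / (1L << 30)) + " GB addressable");
    }
}
```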

The JVM argument -XX:+UseCompressedOops was added to force the JVM to use compressed ordinary object pointers whenever possible. In Java 6 Update 23, the JVM was updated to enable Compressed Oops by default. Compressed Oops works very well for heaps up to about 26 GB, but can still be advantageous for larger heaps.
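
If you want to verify what the JVM actually decided at startup, one option is to ask it directly through HotSpot's diagnostic MXBean. This is a sketch using a HotSpot-specific API (available on JDK 6 and later), not a portable Java facility:

```java
// Asks the running HotSpot JVM whether compressed oops ended up enabled,
// regardless of which flags were passed on the command line.
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class CheckCompressedOops {
    public static void main(String[] args) throws Exception {
        HotSpotDiagnosticMXBean hotspot = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        System.out.println("UseCompressedOops = "
                + hotspot.getVMOption("UseCompressedOops").getValue());
    }
}
```

On recent JVMs, running `java -XX:+PrintFlagsFinal -version` and searching the output for UseCompressedOops shows the same resolved value without writing any code.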

However, I found that with Java 6 Update 24, Compressed Oops are not used at times when they could be, even if you specify -XX:+UseCompressedOops.

Specifically, I was using a 31.7 GB heap and my service was unexpectedly running out of memory. On one of our other systems, the heap for the same service reached only 26 GB after its large internal cache was fully loaded. After a lot of investigation that weekend, I discovered that the working system was on Java 6 Update 26. I downgraded it to Update 24 and easily reproduced the problem.

I had previously done a lot of testing to see what our penalty would be for crossing the 32 GB heap boundary and found that it was about 8 GB, or roughly 25% of the heap. That's actually not too bad, as general estimates I had heard from others ranged from 30-50%. This is probably because the objects we are caching mostly have only primitive data types as fields, so there are relatively few pointers for the JVM to compress in the first place.
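
For anyone who wants to reproduce this kind of measurement, here is a rough sketch of the approach. It is my own probe, not a rigorous benchmark: allocate a large number of small objects, estimate bytes per object from used-heap deltas, and compare runs with compressed oops on and off:

```java
// Rough probe: run twice, e.g. with -Xmx2g -XX:+UseCompressedOops and
// -Xmx2g -XX:-UseCompressedOops, and compare the per-object numbers.
// Results vary with JVM version and GC timing.
public class PointerOverheadProbe {
    static class Node {   // one reference field plus one primitive field
        Node next;
        long value;
    }

    public static void main(String[] args) {
        final int count = 10 * 1000 * 1000;
        Runtime rt = Runtime.getRuntime();

        System.gc();
        long before = rt.totalMemory() - rt.freeMemory();

        Node head = null;
        for (int i = 0; i < count; i++) {
            Node n = new Node();
            n.next = head;
            n.value = i;
            head = n;
        }

        System.gc();
        long after = rt.totalMemory() - rt.freeMemory();
        System.out.printf("~%.1f bytes per object%n",
                (after - before) / (double) count);

        if (head == null) throw new AssertionError(); // keep the list reachable
    }
}
```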

I highly recommend the Everything I Ever Learned About JVM Performance Tuning @Twitter slides and presentation for more info on JVM tuning. Or read Andrew’s summary of Attila’s talk.

By reducing the cache size on the production server enough to not blow out the heap, I confirmed that the heap size when using 1.6.0_24 was about 8 GB higher than when using 1.6.0_26. JVM, meet smoking gun.

The release notes for 1.6.0_26 reference a CompressedOops-related bug fix that may have resulted in this feature now working as described. I read through all of the many bugs fixed in that release, and it seems to be the most likely candidate.
