I was doing some performance testing of a microservice component and was curious to experiment with some GC/JVM flags. I came across -XX:MaxRAMPercentage, which lets the service's heap size scale as a percentage of the host's available RAM. This led me to an interesting gotcha that would save future headaches: compressed oops.
An oop (ordinary object pointer) is the JVM's pointer to an object. Pointers are typically the size of a machine word (32 or 64 bits), and 32 bits only lets you address about 4GB of memory. For alignment and performance reasons, the JVM pads objects so their size is a multiple of 8 bytes, which means the last 3 bits of every object address are always zero. Rather than waste those bits, the JVM keeps references at 32 bits and stores each address shifted right by 3; to find an object, it shifts the reference left by 3 to recover the full address. Compressed oops effectively grant 2^35 bytes, or 32GB, of addressable space using 32-bit references on a 64-bit machine. That's more than enough heap for most applications, and much more performant than 64-bit references that take up twice the space in the heap and CPU caches.
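To make the shifting concrete, here's a minimal sketch of the encode/decode arithmetic. This is illustrative only, not the JVM's actual implementation; it assumes a zero heap base and 8-byte alignment:

```java
public class CompressedOopDemo {
    static final int SHIFT = 3; // objects are 8-byte aligned, so the low 3 bits are always 0

    // Encode: drop the 3 zero bits so the address fits in 32 bits (works up to 2^35 = 32GB)
    static int compress(long address) {
        return (int) (address >>> SHIFT);
    }

    // Decode: shift back to recover the full 64-bit address
    static long decompress(int oop) {
        return Integer.toUnsignedLong(oop) << SHIFT;
    }

    public static void main(String[] args) {
        long address = 34_359_738_360L; // 8-byte aligned, just under the 2^35 (32GB) limit
        int oop = compress(address);
        System.out.printf("64-bit address: %d -> 32-bit oop: %s -> decoded: %d%n",
                address, Integer.toUnsignedString(oop), decompress(oop));
    }
}
```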
What happens if you cross 32GB of heap? Compressed oops are disabled and every reference becomes a full 64-bit address. The catch is that once you cross the threshold, you need to go much higher before you come out ahead: because every reference doubles from 4 to 8 bytes, there's a dead zone between 32GB and roughly 48GB where your heap actually holds less data than a 31GB heap with compressed oops.
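If you want to verify which side of the threshold a given configuration lands on, one way (assuming a HotSpot-based JDK, where the com.sun.management API is available) is to ask the running VM directly:

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class OopCheck {
    public static void main(String[] args) {
        HotSpotDiagnosticMXBean bean =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // Reports whether the JVM kept compressed oops for the configured heap size
        System.out.println("UseCompressedOops = "
                + bean.getVMOption("UseCompressedOops").getValue());
        System.out.println("Max heap = "
                + Runtime.getRuntime().maxMemory() / (1024 * 1024 * 1024.0) + " GB");
    }
}
```

Run it with -Xmx31g and it should print true; at -Xmx33g you're past the limit and it prints false.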
I had been experimenting with changing our hardcoded -Xmx flag to the container-aware -XX:MaxRAMPercentage so the heap could adapt to our instance type. I was going to move forward with 70%; our hosts have 32GB of memory, so that would've been fine. However, after discovering this compression technique, I chose to revert the change. If we ever upgraded our instances to 64GB of RAM, 70% would've dropped the application into the dead zone at ~45GB, and we would suddenly have had an inexplicable heap usage regression.
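Here's the back-of-envelope arithmetic that made me revert, sketched out (the 48GB crossover is a commonly cited rule of thumb, not an exact figure):

```java
public class DeadZoneCheck {
    public static void main(String[] args) {
        double percentage = 70.0; // the -XX:MaxRAMPercentage value under consideration
        long[] hostRamGb = {32, 64};
        for (long ram : hostRamGb) {
            double heapGb = ram * percentage / 100.0;
            // Roughly 32-48GB: 64-bit references waste enough space that a
            // ~31GB compressed-oops heap holds more actual data
            boolean inDeadZone = heapGb >= 32 && heapGb < 48;
            System.out.printf("%dGB host -> %.1fGB heap%s%n",
                    ram, heapGb, inDeadZone ? "  (dead zone!)" : "");
        }
    }
}
```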
Future crisis averted.