## Can GraalVM Native Image Processes Be Detected by jps?
The answer is yes, but only under specific conditions! If you enabled monitoring features such as jvmstat and JMX at build time with `--enable-monitoring=jmxserver,jmxclient,jvmstat`, your native image processes will show up in `jps`. Without these flags, they remain invisible to `jps`.
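As a concrete sketch, a build that keeps the process visible to `jps` might look like the following (the application and image names `my-app.jar` / `my-app` are illustrative, not from the original text):

```shell
# Build the native image with JVM monitoring support compiled in.
# jvmstat is what makes the process visible to jps/jstat;
# jmxserver/jmxclient additionally enable remote JMX.
native-image --enable-monitoring=jmxserver,jmxclient,jvmstat \
    -jar my-app.jar my-app

# Run it, then check visibility from another terminal:
./my-app &
jps -l   # the native process should now appear in the listing
```

Build an image without `--enable-monitoring` and the same `jps -l` call will not list it.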
## Our Current Strategy: GraalVM vs. JVM Usage
Here’s how we’re strategically leveraging both technologies in our stack:
### 1. Lambda-Style Tasks → GraalVM Native Image
For workloads like infrequent but data-heavy scheduled jobs (think weekly reports or ad-hoc data exports), we’re going all-in on GraalVM Native Image. Here’s why this makes perfect sense:
#### Why not traditional microservices?
- Resource waste is real: You’d need to size your microservice for peak processing requirements, leaving resources idle most of the time
- The scaling dance: To be cost-effective with persistent microservices, you’d need to restart with higher memory/CPU before tasks run, then scale back down afterward - what a hassle!
#### Why these workloads are perfect for serverless
- K8s CronJobs and AWS Lambda are ideal platforms, but they demand lightning-fast startup times
- Native Image shines here: Fast startup times plus simpler dependency trees make migration straightforward
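The startup-time difference is easy to see for yourself. A rough comparison, assuming a hypothetical `report-job` built both as a jar and as a native image (names and the `--dry-run` flag are illustrative):

```shell
# JVM launch: pays for JVM init, class loading, and JIT warmup
time java -jar report-job.jar --dry-run

# Native image launch: ahead-of-time compiled, typically starts in tens of ms
time ./report-job --dry-run
```

For a job that runs once a week, that startup gap matters far less than for a Lambda that cold-starts on every burst of invocations, which is exactly why the serverless platforms above reward native images.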
### 2. Long-Running Microservices → Stick with JVM
For our always-on services, the JVM remains our go-to choice, but with some smart optimizations:
#### Storage-Heavy Services
For microservices managing lots of storage I/O connections, we’re taking a measured approach:
- Skip CRaC for now - too many moving parts with persistent connections
- Enable CDS for faster class loading
- Consider Graal JIT as a drop-in replacement for the C2 compiler
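The CDS step can be sketched with the dynamic CDS flags available since JDK 13 (the jar and archive names are illustrative). For the Graal JIT, the simplest route is running the same service on a GraalVM JDK, where Graal replaces C2 out of the box:

```shell
# 1) Training run: record loaded classes and dump a dynamic
#    CDS archive when the process exits
java -XX:ArchiveClassesAtExit=app-cds.jsa -jar storage-service.jar

# 2) Subsequent runs: map the archive for faster class loading
java -XX:SharedArchiveFile=app-cds.jsa -jar storage-service.jar
```

The training run should exercise a representative code path so the archive covers the classes the service actually loads at startup.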
#### Stateless Services → CRaC All the Way
For services without heavy storage dependencies - web engines, API gateways, and ad services (with heavy local caching) - CRaC is a game-changer. These are exactly the traffic-sensitive services that need to scale up rapidly when demand spikes hit!
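Operationally, CRaC is a checkpoint/restore cycle. A minimal sketch, assuming a CRaC-enabled JDK (e.g. an Azul Zulu build with CRaC) and an illustrative `gateway.jar`:

```shell
# Start the service with a checkpoint directory
java -XX:CRaCCheckpointTo=/opt/crac-data -jar gateway.jar &

# After warmup, snapshot the running, warmed-up process
jcmd $! JDK.checkpoint

# New instances restore from the snapshot instead of cold-starting,
# which is what makes rapid scale-out under traffic spikes feasible
java -XX:CRaCRestoreFrom=/opt/crac-data
```

The catch, and the reason we skip CRaC for storage-heavy services above, is that open connections and other OS resources must be closed before the checkpoint and reopened on restore, which is much simpler when the service is stateless.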
This hybrid approach gives us the best of both worlds: blazing-fast serverless execution for batch workloads and optimized, rapidly-scaling microservices for real-time traffic.