Our service runs on JDK 17. It is a simple storage-query service without complex logic. Occasionally the TP9999 latency of its RPC interface surges (from tens of milliseconds to several seconds, or even more than ten seconds), accompanied by 100% CPU utilization.
This often happens suddenly on machines that had been running stably, and after several seconds it recovers on its own.
Using JFR, we found the following at the abnormal moments:
There is always JIT deoptimization and recompilation occurring, concentrated on the JSON deserialization methods (our main business logic is JSON deserialization).
Sometimes it is the C2 compiler threads that show excessive CPU usage; at other times it is the business threads that consume the CPU.
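For context, a minimal sketch of the pattern that can produce exactly this symptom (this is an assumption about the cause, not the asker's real code; `decode`, `payload`, and `compact` are made-up names): if a branch is never taken while a method is hot, C2 may compile only the common path and plant an uncommon trap on the rare one. The first input that takes the rare branch deoptimizes the compiled method and forces a recompile, which is the churn visible in JFR.

```java
public class DeoptSketch {
    // While this method is hot, the interpreter/C1 collect a branch profile.
    // If 'compact' is always false during profiling, C2 may speculate that the
    // true-branch is dead and replace it with an uncommon trap. The first call
    // with compact == true then deoptimizes back to the interpreter and
    // triggers a recompilation.
    static int decode(byte[] payload, boolean compact) {
        if (compact) {
            return payload.length;              // rarely taken branch
        }
        int sum = 0;
        for (byte b : payload) sum += b & 0xFF; // common path
        return sum;
    }

    public static void main(String[] args) {
        byte[] data = {1, 2, 3};
        // Warm-up traffic: the branch profile says 'compact' is never true.
        int warm = 0;
        for (int i = 0; i < 100_000; i++) warm += decode(data, false);
        // Later traffic flips the branch; in a real run this is where
        // jdk.Deoptimization events would show up in a JFR recording.
        int flipped = decode(data, true);
        System.out.println(warm + " " + flipped);
    }
}
```

In a JSON deserializer the same shape appears whenever a field type or shape that was absent during warm-up suddenly starts arriving in production traffic.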
I suspect the JIT deoptimizations are what degrade the interface's performance. What should I do to fix, or at least reduce, this problem?
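Before resorting to patching the JVM, one way to confirm the diagnosis is to inspect the deoptimization reasons directly and experiment with HotSpot's trap-related knobs. The commands below are a sketch: the flags and the `jfr` tool are real (JDK 17 ships the `jdk.Deoptimization` JFR event), but the file names are placeholders and the flag values shown are just the defaults, not a verified fix:

```
# Dump every deoptimization event (with reason and method) from a recording
jfr print --events jdk.Deoptimization recording.jfr

# Log JIT compilation and deoptimization activity to a file
java -XX:+UnlockDiagnosticVMOptions -XX:+LogCompilation \
     -XX:LogFile=hotspot.log -jar your-service.jar

# Knobs bounding how many uncommon traps HotSpot tolerates per bytecode /
# per method before it stops speculating (defaults shown; tune with care)
java -XX:PerBytecodeTrapLimit=4 -XX:PerMethodTrapLimit=100 -jar your-service.jar
```

The `jdk.Deoptimization` events record the compiled method, the bytecode index, and the deopt reason (e.g. `unstable_if`), which tells you which speculative branch is being invalidated.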
I solved this problem simply by removing the branch prediction logic from OpenJDK: