I am using RabbitMQ version 3.7.17 with classic queues version 1 (CQv1). Messages are published as "persistent". According to the official RabbitMQ documentation, with CQv1 (which applies here) the broker starts paging messages from RAM to disk only when the paging watermark threshold is breached; by default this is 50% of the high watermark, i.e. of the memory limit assigned to the RabbitMQ broker. For example, the configuration below blocks publishers at 40% of available memory and starts paging at half of that, i.e. at 20% of available memory (0.50 × 0.40):
vm_memory_high_watermark_paging_ratio = 0.50
vm_memory_high_watermark.relative = 0.4
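For context, the messages are published roughly like this (a minimal pika sketch of my setup; the host, queue name, and credentials are placeholders, not my actual values):

```python
import pika

# Connect to the broker (placeholder host, default credentials).
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Durable queue, so the queue definition survives broker restarts.
channel.queue_declare(queue="test_queue", durable=True)

# delivery_mode=2 marks the message itself as persistent.
channel.basic_publish(
    exchange="",
    routing_key="test_queue",
    body=b"payload",
    properties=pika.BasicProperties(delivery_mode=2),
)
connection.close()
```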
However, in the RabbitMQ management UI I observe that even while memory usage is well below the paging threshold (defaults in effect), disk writes are still happening in parallel (143/s), at almost the same rate as messages are being published into memory (172/s). (Screenshot from the management UI attached.)
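For reference, the per-queue RAM vs. disk byte counts can also be pulled from the management HTTP API (a sketch assuming the management plugin is enabled on localhost:15672 with default guest credentials; the vhost and queue name are placeholders):

```python
import base64
import json
import urllib.request

# "%2F" is the URL-encoded default vhost "/"; queue name is a placeholder.
url = "http://localhost:15672/api/queues/%2F/test_queue"
auth = base64.b64encode(b"guest:guest").decode()

req = urllib.request.Request(url, headers={"Authorization": "Basic " + auth})
with urllib.request.urlopen(req) as resp:
    q = json.load(resp)

# Bytes of messages held in RAM vs. bytes persisted to disk for this queue.
print("message_bytes_ram:       ", q.get("message_bytes_ram"))
print("message_bytes_persistent:", q.get("message_bytes_persistent"))
```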
What is not clear to me: why are disk writes happening at all when the memory paging threshold has not been breached?
Follow-up questions on the same topic:
- If, let's say, a disk write happens for every message anyway, why is RAM consumption also high? Could we benefit by removing this redundancy and setting a lower watermark paging ratio, since the messages are written to disk regardless?
- When consumers connect to this queue, how is memory managed? To elaborate: say there is a single consumer on this queue with a consumer prefetch count of 1000 (roughly the setup sketched below). Will the RabbitMQ broker then keep 1000 messages readily available in RAM, even if the applied watermark paging threshold cannot hold those 1000 messages? How is RAM-disk swapping managed at the broker, taking the consumer connection scenario into account?
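For clarity, the consumer scenario in the last question is roughly this (a minimal pika 1.x sketch; host and queue name are placeholders):

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Allow up to 1000 unacknowledged messages in flight to this consumer.
channel.basic_qos(prefetch_count=1000)

def on_message(ch, method, properties, body):
    # ... process body ...
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="test_queue", on_message_callback=on_message)
channel.start_consuming()
```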
I have studied a lot of related material in the official RabbitMQ documentation, but could not find a concrete explanation for these issues. Please share your insights, and kindly keep your answers relevant to RabbitMQ v3.7.17 and below, using only classic queues v1.