The expected record (message) read rate is not achieved with my Spark Streaming configuration (maxRatePerPartition, number of executors, backpressure) for a job interval of 5 minutes. I tried increasing the offsets read per batch, but the read rate keeps falling back to a lower number of records, even though the topic has far more records available.
Configuration: maxRatePerPartition = 1000; the Spark Streaming job runs every 5 minutes (300 seconds), pulling data from a Kafka topic with 15 partitions; number of executors = 15, with 1 core per executor; spark.streaming.backpressure is enabled.
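For reference, the setup described above corresponds roughly to the following Spark properties (property names are the real Spark config keys; the values are taken from the question, and the 300-second batch interval itself is set on the StreamingContext in code, not here):

```properties
# Cap on Kafka records read per partition per second
spark.streaming.kafka.maxRatePerPartition=1000
# Backpressure dynamically lowers the effective rate below the cap
spark.streaming.backpressure.enabled=true
# 15 executors with 1 core each, matching the 15 topic partitions
spark.executor.instances=15
spark.executor.cores=1
```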
Configured (maximum) record read rate => 1000 * 15 * 300 = 4.5 million records per batch. Actual record read rate => 1.05 million records (70K records read per partition).
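The arithmetic behind those two numbers can be spelled out as follows (values are the ones stated above):

```python
# Maximum records per batch implied by the configuration:
# maxRatePerPartition * partition count * batch interval in seconds
max_rate_per_partition = 1000   # spark.streaming.kafka.maxRatePerPartition
partitions = 15
batch_interval_s = 300          # 5-minute batch

expected_max = max_rate_per_partition * partitions * batch_interval_s
print(expected_max)             # 4500000 records per batch

# Observed throughput from the question
actual_total = 1_050_000
actual_per_partition = actual_total // partitions
print(actual_per_partition)     # 70000 records per partition
```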
However, the record count available in each partition is around 135K.