We have a Spring Boot application designed specifically to run Spring Batch jobs. We are using the Spring Batch partitioner approach, which we chose because we needed resumability/restartability of a batch on failure on a single node. Reference: Restarting a job after server crash is not processing the unprocessed data.
Is there a way to make a single job that has to process a huge number of records share the load between nodes?
Yes, you can create a job with a remotely partitioned step, where each worker step is assigned a partition of the data and is deployed on a different node.
You can find an example here.
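To illustrate, here is a minimal sketch of the range-splitting logic that a custom Spring Batch `Partitioner` typically implements in its `partition(int gridSize)` method. In the real class each `{min, max}` pair would go into an `ExecutionContext` (e.g. via `putLong`) so each remote worker knows which slice of the data to read; plain `long[]` pairs are used here so the sketch runs without Spring on the classpath. The key names `minValue`/`maxValue` and the idea of partitioning on a numeric id column are illustrative assumptions, not part of your setup.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RangePartitioner {

    /**
     * Split the inclusive id range [minId, maxId] into up to gridSize
     * contiguous partitions, one per worker step.
     */
    public static Map<String, long[]> partition(long minId, long maxId, int gridSize) {
        Map<String, long[]> partitions = new LinkedHashMap<>();
        long targetSize = (maxId - minId + 1) / gridSize + 1;
        long start = minId;
        long end = start + targetSize - 1;
        for (int i = 0; i < gridSize && start <= maxId; i++) {
            if (end > maxId) {
                end = maxId; // last partition may be smaller
            }
            // In a real Partitioner:
            //   ExecutionContext ctx = new ExecutionContext();
            //   ctx.putLong("minValue", start);
            //   ctx.putLong("maxValue", end);
            partitions.put("partition" + i, new long[] { start, end });
            start = end + 1;
            end = start + targetSize - 1;
        }
        return partitions;
    }

    public static void main(String[] args) {
        // e.g. ids 1..100 spread across 4 worker nodes
        for (Map.Entry<String, long[]> e : partition(1, 100, 4).entrySet()) {
            System.out.println(e.getKey() + " -> [" + e.getValue()[0] + ", " + e.getValue()[1] + "]");
        }
    }
}
```

With remote partitioning, the manager step runs this splitting logic once and sends each partition's `ExecutionContext` over a messaging middleware (e.g. via Spring Integration) to worker steps on other nodes, so restartability is preserved per partition.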