I've been reading about pipelined region scheduling in Flink and am a bit confused about what it means. My understanding is that a streaming job is always fully pipelined, whereas a batch job can produce intermediate results that are blocking. This makes sense: in a batch job, an operator can consume its entire input and produce a complete result, and only then does that result become available to the next operator for further processing.
The blog post then describes a topology consisting of four pipelined regions, with both pipelined and blocking data exchanges in the same job graph. My question is: how would one create such a job in Flink, i.e. one that contains both pipelined and blocking data exchanges? A simple code example showcasing this capability would be much appreciated.
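For reference, here is the kind of job I have in mind. This is only a sketch of my current understanding, not something I've verified against the scheduler: my assumption is that in `RuntimeExecutionMode.BATCH`, all-to-all exchanges such as `keyBy` become blocking shuffles, while forward/chained connections between operators stay pipelined within a region. The class name and the toy data are placeholders.

```java
import org.apache.flink.api.common.RuntimeExecutionMode;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class MixedExchangeSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Assumption: in BATCH mode the runtime turns all-to-all exchanges
        // (e.g. the keyBy below) into blocking data exchanges, while
        // forward connections (source -> map) remain pipelined, so the
        // job graph would contain both exchange types.
        env.setRuntimeMode(RuntimeExecutionMode.BATCH);

        env.fromElements(Tuple2.of("a", 1), Tuple2.of("b", 2), Tuple2.of("a", 3))
            // pipelined: map is forwarded/chained to the source
            .map(t -> Tuple2.of(t.f0, t.f1 * 10))
            .returns(Types.TUPLE(Types.STRING, Types.INT))
            // presumably a blocking exchange in BATCH mode
            .keyBy(t -> t.f0)
            .sum(1)
            .print();

        env.execute("mixed pipelined/blocking sketch");
    }
}
```

Is this roughly the right way to get both exchange types in one topology, or does it require something lower-level than setting the runtime mode?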