Imagine we have a vector of outputs [u_1, u_2, ..., u_n] from a continuous transformation f applied to a set of inputs [p_1, p_2, ..., p_n], i.e. f(p_i) = u_i. We next develop another set of inputs [q_1, q_2, ..., q_m], where m is not necessarily equal to n, and get observations [v_1, v_2, ..., v_m]. The problem is that the process f is somewhat stochastic: its hidden state may change in some unpredictable way between calls, so two calls on the same input may return two somewhat different vectors of results. An easy way to think of this is that f computes a deterministic result but adds some random noise to that result before returning it. Is there a good non-parametric way to rescale the second set of outputs so that it is comparable to the first?
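To make the setup concrete, here is a minimal toy sketch of the kind of behaviour I mean. The square-root transform, the affine hidden state, and the noise levels are all illustrative assumptions, not properties of my real f:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(inputs, rng):
    """Toy stand-in for the stochastic process: a deterministic
    transformation plus a hidden per-call offset and scale.
    (Assumption: the hidden state acts as an affine distortion
    shared by all outputs within a single call.)"""
    hidden_shift = rng.normal(0, 1.0)       # changes between calls
    hidden_scale = rng.lognormal(0, 0.2)    # changes between calls
    return (hidden_scale * np.sqrt(inputs)
            + hidden_shift
            + rng.normal(0, 0.05, size=len(inputs)))

p = rng.uniform(1, 10, size=50)
q = rng.uniform(1, 10, size=30)  # m need not equal n

u = f(p, rng)  # first call
v = f(q, rng)  # second call: different hidden state, so a different scale
```

The question is how to map v onto the scale of u without assuming a parametric form for f or for the hidden-state distortion.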
One naïve approach would be to concatenate the first and second sets of inputs and run the second call on the combined set, so that the transformations of both p and q happen under the same hidden state and the results are comparable. But if I were doing this in a loop and the function call were expensive, I would have to figure out a good selection of inputs from earlier iterations to carry forward, to ensure that the transformation results from my current iteration stay comparable with those from previous iterations. A sketch of what I mean is below.
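Continuing the toy f from the sketch above, here is roughly what that loop would look like. The small fixed anchor set and the median/IQR rescaling are illustrative placeholders for whatever selection and scaling scheme would actually be appropriate, not a method I know to be correct:

```python
import numpy as np

rng = np.random.default_rng(1)

# Carry a small set of "anchor" inputs across iterations, concatenate
# them onto each new batch, and use the anchors' outputs to put every
# iteration on the first iteration's scale.
anchors = rng.uniform(1, 10, size=5)
ref_out = None
results = []

for it in range(10):
    q = rng.uniform(1, 10, size=20)             # this iteration's inputs
    out = f(np.concatenate([anchors, q]), rng)  # one expensive call
    a_out, q_out = out[:len(anchors)], out[len(anchors):]
    if ref_out is None:
        ref_out = a_out  # first iteration's anchors define the reference scale
    # Robust affine rescaling via the shared anchors (median and IQR).
    iqr_ref = np.subtract(*np.percentile(ref_out, [75, 25]))
    iqr_cur = np.subtract(*np.percentile(a_out, [75, 25]))
    rescaled = (q_out - np.median(a_out)) * (iqr_ref / iqr_cur) + np.median(ref_out)
    results.append(rescaled)
```

The cost here is that every call spends part of its budget re-evaluating the anchors, which is exactly the overhead I would like to avoid, or at least minimise by choosing the anchor inputs well.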