Neural net giving different outputs for same input


I have a neural network class that I have already trained; the weights were saved by pickling. It is a generator network (as in GANs): it takes a multivariate Gaussian (noise) vector as input and outputs a vector of interest (a parameter vector). Here is the declaration of the class:

# imports as used in my project (standalone Keras);
# configs and hp are my own config parser and helper module
import numpy as np
from keras.layers import Input, Dense, LeakyReLU, concatenate
from keras.models import Sequential, Model
from keras.optimizers import Adam

class MLP():
    def __init__(self, cond_class, min_x, max_x, n_samples, NAMES_km, param_fixing, noise_input):
        self.latent_dim = int(configs['MLP']['latent_dim'])
        print(self.latent_dim)
        self.cond_class = cond_class
        self.min_x = min_x
        self.max_x = max_x
        self.label_shape = 1
        self.param_fixing = param_fixing
        self.names_km = NAMES_km
        self.noise = noise_input

        self.n_samples = n_samples
        self.n_parameters = int(configs['MLP']['no_kms'])
        self.param_shape = self.n_parameters

        optimizer = Adam(0.0002, 0.5)

        # Build and compile the generator
        self.generator = self.build_generator()
        self.generator.compile(loss=['binary_crossentropy'],
                               optimizer=optimizer)
        # self.generator.summary()
        # The generator takes noise and the target label as input
        # and generates the corresponding parameter vector for that label
        noise = Input(shape=(self.latent_dim,))
        label = Input(shape=(self.label_shape,))
        param = self.generator([noise, label])

    def build_generator(self):

        # get architecture from config
        layer_1 = int(configs['MLP']['layer_1'])
        layer_2 = int(configs['MLP']['layer_2'])
        layer_3 = int(configs['MLP']['layer_3'])

        model = Sequential()
        model.add(Dense(layer_1, input_dim=self.latent_dim + self.label_shape))
        model.add(LeakyReLU(alpha=0.2))

        model.add(Dense(layer_2))
        model.add(LeakyReLU(alpha=0.2))

        model.add(Dense(layer_3))
        model.add(LeakyReLU(alpha=0.2))

        model.add(Dense(self.n_parameters))  # , activation = 'tanh'))

        noise = Input(shape=(self.latent_dim,))
        label = Input(shape=(self.label_shape,))

        model_input = concatenate([noise, label])

        param = model(model_input)

        return Model([noise, label], param)

    def sample_parameters(self):

        # we just try to generate stable models
        
        sampled_labels = np.ones(self.n_samples).reshape(-1, 1) * self.cond_class

        if len(self.noise) == 0:
            self.noise = np.random.normal(0, 1, (self.n_samples, self.latent_dim))

        gen_par = self.generator.predict([self.noise, sampled_labels])
        # Rescale parameters according to previous scaling on X_train
        x_new, new_min, new_max = hp.unscale_range(gen_par, np.min(gen_par), np.max(gen_par), self.min_x, self.max_x)

        if self.param_fixing:
            return self.param_fixer(x_new), self.noise
        else:
            return x_new, self.noise

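For reference, hp.unscale_range is my own helper; it does a linear min-max rescale, roughly equivalent to this sketch:

    # Rough sketch of what hp.unscale_range does: map x from
    # [old_min, old_max] linearly onto [new_min, new_max]
    def unscale_range(x, old_min, old_max, new_min, new_max):
        x_new = (x - old_min) / (old_max - old_min) * (new_max - new_min) + new_min
        return x_new, new_min, new_max

Note that sample_parameters calls it with np.min(gen_par) and np.max(gen_par), so the rescaled values depend on the batch-wide min and max of the generated output, not just on each row.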
Now, when I take a saved input vector (a noise .npy file) and saved weights (a .pkl file), load the weights, and try to recreate the output like so:

old_noise = np.load('noise_0.npy')
path_to_weights = 'weights_0.pkl'
new_noise = []
idx_to_choose = 20
n_sets = 10
for _ in range(n_sets):
    new_noise.append(old_noise[idx_to_choose])
new_noise = np.array(new_noise)

mlp = MLP(cond_class, lnminkm, lnmaxkm, n_sets, names_km, pf_flag, new_noise)

# Load saved weights and generate
opt_weights = hp.load_pkl(path_to_weights)
mlp.generator.set_weights(opt_weights)
gen_params, _ = mlp.sample_parameters()
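For what it's worth, a round-trip check like the following (just a sketch, assuming opt_weights is a plain list of NumPy arrays in the same layer order that get_weights returns) should print 0.0 for every layer if the weights load correctly:

    # Sanity check (sketch): set_weights / get_weights should round-trip exactly
    for w_loaded, w_model in zip(opt_weights, mlp.generator.get_weights()):
        print(w_loaded.shape, np.max(np.abs(w_loaded - w_model)))  # expect 0.0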

the output vector (the generated parameter vector) is not the same as the previous one. Moreover, even though the same input noise is used n_sets times, I get weird behavior in some components of the output vector, e.g.:

[two plots of affected output-vector components across the repeated samples, showing small oscillations]

Overall, 10 of the 274 components of the output vector show mild oscillations for the same noise input, while the others don't.
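Since all n_sets rows of new_noise are identical, every row of gen_params should be identical too. A quick check like this (sketch) shows which components actually vary across the batch:

    # Per-component spread across the 10 rows; with identical inputs this
    # should be all zeros, yet some components come out non-zero
    spread = gen_params.max(axis=0) - gen_params.min(axis=0)
    print(np.count_nonzero(spread), 'of', spread.size, 'components vary across rows')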

I have tried several different weight files, from the same and from different training runs, but the issue persists. I have also tried different conda environments and Docker images, without success. Any help would be appreciated.
