I'm building a face recognition model (FaceNet). As you may know, training it requires generating triplets. I'm getting an error in my implementation; if any changes to the code below are needed, or there is a better implementation, please answer.
Below is my code for generating triplets for the FaceNet model:
# since we have very little data we don't use the constraints given in the original paper
def select_all_triplets(images, labels):
    batch_size = len(labels)
    pos_images = []  # stores positive images
    neg_images = []  # stores negative images
    for i in range(batch_size):
        anchor_label = labels[i]
        pos_list = []  # stores indices of pos images
        neg_list = []  # stores indices of neg images
        for j in range(batch_size):
            if j != i:  # ∴ len(pos_list) + len(neg_list) = len(images) - 1
                if labels[j] == anchor_label:
                    pos_list.append(j)
                else:
                    neg_list.append(j)
        pos_images.append(tf.gather(images, pos_list))
        neg_images.append(tf.gather(images, neg_list))
        print(pos_list, neg_list)
    positive_images = tf.random.shuffle(tf.stack(pos_images))
    negative_images = tf.random.shuffle(tf.stack(neg_images))
    return positive_images, negative_images

# testing
img, lbl = next(iter(train_ds))
select_all_triplets(img, lbl)
but I am getting this error:
InvalidArgumentError: Shapes of all inputs must match: values[0].shape = [7,96,96,3] != values[2].shape = [6,96,96,3] [Op:Pack] name: stack
The per-anchor lists have different lengths, so `tf.stack` fails. I thought of using a ragged tensor, but I can't figure out how. If there are other approaches, please list them.
To overcome the inconsistent shapes, you could use RaggedTensors, specifically `tf.ragged.stack`, to hold the variable-length lists of positive and negative images before shuffling along the anchor dimension:
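Here is a minimal sketch of that approach, assuming TensorFlow 2.x in eager mode (the function name `select_all_triplets_ragged` is hypothetical). One caveat: `tf.random.shuffle` does not accept a RaggedTensor, so the sketch shuffles by gathering with a shuffled index vector instead:

```python
import tensorflow as tf

def select_all_triplets_ragged(images, labels):
    # Same selection logic as in the question: for each anchor i, collect the
    # indices of all other images with the same / a different label.
    batch_size = len(labels)
    pos_images, neg_images = [], []
    for i in range(batch_size):
        anchor_label = labels[i]
        pos_list = [j for j in range(batch_size)
                    if j != i and labels[j] == anchor_label]
        neg_list = [j for j in range(batch_size)
                    if j != i and labels[j] != anchor_label]
        pos_images.append(tf.gather(images, pos_list))
        neg_images.append(tf.gather(images, neg_list))

    # tf.ragged.stack accepts tensors whose first dimensions differ, producing
    # a RaggedTensor of shape [batch_size, None, H, W, C] instead of failing
    # like tf.stack does.
    positive_images = tf.ragged.stack(pos_images)
    negative_images = tf.ragged.stack(neg_images)

    # tf.random.shuffle only works on dense tensors, so shuffle the anchor
    # dimension by gathering rows with a shuffled permutation of indices.
    perm = tf.random.shuffle(tf.range(batch_size))
    return tf.gather(positive_images, perm), tf.gather(negative_images, perm)
```

Using the same permutation for both outputs keeps each anchor's positive and negative rows aligned; downstream you can iterate over the ragged rows (or call `.to_tensor()` with padding) when forming the actual triplets.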