My tflearn-based model keeps giving me the error "list index out of range" when I try to fit it

I tried to make a tflearn-based model, but when I try to fit it, it gives me the error below. I tried lowering the batch size and the number of epochs, but it still isn't working, so I'm wondering what went wrong. I also checked the dimensions of my input and output layers, but I don't know what is causing the problem. For reference, my training and output sets each have 26 elements; each training element is a numpy array of length 46, and each output element is a numpy array of length 6. The error:

IndexError                                Traceback (most recent call last)
Cell In[43], line 1
----> 1 model.fit(training, output ,n_epoch=10,batch_size=8,show_metric=True)
      2 model.save('chatbot.tflearn')

File d:\Desktop AI\env\Lib\site-packages\tflearn\models\dnn.py:183, in DNN.fit(self, X_inputs, Y_targets, n_epoch, validation_set, show_metric, batch_size, shuffle, snapshot_epoch, snapshot_step, excl_trainops, validation_batch_size, run_id, callbacks)
    178         valY = validation_set[1]
    180 # For simplicity we build sync dict synchronously but Trainer support
    181 # asynchronous feed dict allocation.
    182 # TODO: check memory impact for large data and multiple optimizers
--> 183 feed_dict = feed_dict_builder(X_inputs, Y_targets, self.inputs,
    184                               self.targets)
    185 feed_dicts = [feed_dict for i in self.train_ops]
    186 val_feed_dicts = None

File d:\Desktop AI\env\Lib\site-packages\tflearn\utils.py:300, in feed_dict_builder(X, Y, net_inputs, net_targets)
    298         X = [X]
    299     for i, x in enumerate(X):
--> 300         feed_dict[net_inputs[i]] = x
    301 else:
    302     # If a dict is provided
    303     for key, val in X.items():
    304         # Copy to feed_dict if dict already fits {placeholder: data} template

IndexError: list index out of range
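
To double-check the dimensions I describe above, here is a minimal sanity check that can run just before the failing fit call (a sketch; training, output, and model are the objects built in the code below):

print(training.shape)  # expected (26, 46) per the description above
print(output.shape)    # expected (26, 6)
# The traceback fails at net_inputs[i], so the list of input
# placeholders that tflearn collected is worth inspecting too:
print(model.inputs)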

The code:

import json
import nltk
import numpy as np
import tflearn
from nltk.stem.lancaster import LancasterStemmer

stemmer = LancasterStemmer()


with open(r'D:\Desktop AI\Wednesday\chat\intents.json') as file:
    data = json.load(file)

words = []
labels = []
docs_x = []
docs_y = []

for intent in data['intents']:
    for pattern in intent['patterns']:
        wrd = nltk.word_tokenize(pattern)
        words.extend(wrd)
        docs_x.append(wrd)
        docs_y.append(intent['tag'])
    if intent['tag'] not in labels:
        labels.append(intent['tag'])

words = [stemmer.stem(w.lower()) for w in words if w != "?"]
words = sorted(list(set(words)))

labels = sorted(labels)
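
# Per the description above, words should now hold the 46 unique stems
# (the input width) and labels the 6 tags (the output width).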

training = []
output = []

out_empty = [0 for _ in range(len(labels))]

# creating a bag of words using one hot encoding
for x, doc in enumerate(docs_x):
    bag = []

    wrds = [stemmer.stem(w) for w in doc]

    for w in words:
        if w in wrds:
            bag.append(1)
        else:
            bag.append(0)

    output_row = out_empty[:]
    output_row[labels.index(docs_y[x])] = 1

    training.append(bag)
    output.append(output_row)
    
training = np.array(training)
output = np.array(output)
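
# Sanity check (shapes assumed from the description above):
# training.shape should be (26, 46) and output.shape (26, 6);
# these set len(training[0]) and len(output[0]) used below.
print(training.shape, output.shape)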

# neural layer

net = tflearn.input_data(shape=[None, len(training[0])])
net = tflearn.fully_connected(net, 8)
net = tflearn.fully_connected(net, 8)
net = tflearn.fully_connected(net, len(output[0]), activation='softmax')
net = tflearn.regression(net)

model = tflearn.DNN(net)
model.fit(training, output, n_epoch=10, batch_size=8, show_metric=True)
model.save('chatbot.tflearn')
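
Since the traceback shows this runs in a notebook (Cell In[43]), one possibility I can't rule out is stale graph state from re-running cells, which could leave tflearn's collected input list empty. A minimal sketch of rebuilding everything from a clean graph (an assumption, not a confirmed fix; tf.compat.v1.reset_default_graph is the TF2 spelling of TF1's tf.reset_default_graph):

import tensorflow as tf

# Assumption: clear any previously built graph so the INPUTS collection
# only contains this model's placeholder before the layers are defined
tf.compat.v1.reset_default_graph()

net = tflearn.input_data(shape=[None, len(training[0])])
net = tflearn.fully_connected(net, 8)
net = tflearn.fully_connected(net, 8)
net = tflearn.fully_connected(net, len(output[0]), activation='softmax')
net = tflearn.regression(net)

model = tflearn.DNN(net)
model.fit(training, output, n_epoch=10, batch_size=8, show_metric=True)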