Error message 'No algorithm worked' in CNN using TensorFlow 2 and NVIDIA RTX 2080 Max-Q

I have used standard code to download the Fashion-MNIST dataset and train a CNN, using TensorFlow 2 (2.3.1) and Keras (2.4.0). The code works fine on a normal laptop without a GPU. However, on a laptop with an NVIDIA RTX 2080 Max-Q I get the error message 'No algorithm worked!'.

Do you have any suggestions for how to run the code on a laptop with a GPU?

The code I have used:

from __future__ import absolute_import, division, print_function, unicode_literals
from tensorflow import keras as ks
   
fashion_mnist = ks.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = fashion_mnist.load_data()
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

# Scale pixel values to [0, 1] and add a single channel dimension
training_images = training_images / 255.0
test_images = test_images / 255.0
training_images = training_images.reshape(60000, 28, 28, 1)
test_images = test_images.reshape(10000, 28, 28, 1)

cnn_model = ks.models.Sequential()
cnn_model.add(ks.layers.Conv2D(50, (3, 3), activation='relu', padding='same', input_shape=(28, 28, 1), name='Conv2D_l'))
cnn_model.add(ks.layers.MaxPooling2D((2, 2), padding='same', name='MaxPooling_2D'))
cnn_model.add(ks.layers.Flatten(name='Flatten'))
cnn_model.add(ks.layers.Dense(50, activation='relu', name='Hidden_layer'))
cnn_model.add(ks.layers.Dense(10, activation='softmax', name='Output_layer'))

cnn_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

cnn_model.fit(training_images, training_labels, epochs=100)
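
For reference, a quick way to confirm that TensorFlow detects the GPU at all (a minimal diagnostic using the standard tf.config.experimental API; an empty list would point to the CUDA/cuDNN setup rather than the model):

import tensorflow as tf

# Print the GPUs TensorFlow can see; an empty list means the
# driver/CUDA/cuDNN installation is the problem, not the model.
print(tf.config.experimental.list_physical_devices('GPU'))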

2 Answers

Frightera:

Providing the full error message might be more useful next time.

Adding these lines might solve your issue:

from tensorflow.compat.v1 import ConfigProto
from tensorflow.compat.v1 import InteractiveSession

# Let the GPU memory allocation grow as needed instead of
# reserving all memory up front.
config = ConfigProto()
config.gpu_options.allow_growth = True
session = InteractiveSession(config=config)
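
If growing the allocation is not enough, capping TensorFlow at a fixed amount of GPU memory is another option. This is a minimal sketch using the experimental API available in TF 2.3; the 2048 MB limit is only an illustrative value, not a recommendation:

import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    # Restrict TensorFlow to 2048 MB on the first GPU; must run
    # before any GPU has been initialized.
    tf.config.experimental.set_virtual_device_configuration(
        gpus[0],
        [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=2048)])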
trazoM:

I am running on Ubuntu. In addition to what Frightera said above, I would always add something similar to this:

import tensorflow as tf

gpu_devices = tf.config.experimental.list_physical_devices('GPU')
for device in gpu_devices:
    tf.config.experimental.set_memory_growth(device, True)
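
Note that set_memory_growth raises a RuntimeError if it is called after any GPU has already been initialized, so a defensive variant (a minimal sketch of the same idea) wraps the loop:

import tensorflow as tf

gpu_devices = tf.config.experimental.list_physical_devices('GPU')
try:
    for device in gpu_devices:
        tf.config.experimental.set_memory_growth(device, True)
except RuntimeError as e:
    # Memory growth must be configured before the GPU is initialized.
    print(e)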

I usually free my GPU memory by killing the Python processes I ran previously.

Press Ctrl + Alt + T to open a terminal, then run:

sudo fuser -v /dev/nvidia*

A table of processes will appear; then run

sudo kill -9 <PID number>

where <PID number> is the number corresponding to the Python process shown in the table.

After this, rerun your code and be happy.