My input is an array of 64 integers.
```python
model = Sequential()
model.add(Input(shape=(68,), name="input"))
model.add(Conv1D(64, 2, activation="relu", padding="same", name="convLayer"))
```
I have 10,000 of these arrays in my training set. Am I supposed to be specifying this somewhere in order for Conv1D to work?
I am getting the dreaded

```
ValueError: Input 0 of layer convLayer is incompatible with the layer: : expected min_ndim=3, found ndim=2. Full shape received: [None, 68]
```

error, and I really don't understand what I need to do.
Don't let the name confuse you. The `tf.keras.layers.Conv1D` layer needs input of the shape `(time_steps, features)`. If your dataset consists of 10,000 samples, each holding 64 values, then your data has the shape `(10000, 64)`, which cannot be fed directly to a `tf.keras.layers.Conv1D` layer: you are missing the `time_steps` dimension. What you can do is use `tf.keras.layers.RepeatVector`, which repeats your input array `n` times (for example, `n = 5`). This way your `Conv1D` layer gets an input of the shape `(5, 64)`. Check out the documentation for more information.

As a side note, you should ask yourself whether a `tf.keras.layers.Conv1D` layer is the right option for your use case. This layer is usually used for NLP and other time-series tasks. In sentence classification, for example, each word in a sentence is usually mapped to a high-dimensional word-vector representation, resulting in data with the shape `(time_steps, features)`. If you want to use one-hot encoded character embeddings, it would look something like this:
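A minimal sketch of building such a sample (the alphabet and the text are made-up assumptions, chosen so that 10 characters are drawn from a 10-symbol alphabet):

```python
import numpy as np

# Hypothetical sketch: one-hot encode a 10-character string over an
# assumed 10-symbol alphabet, producing one sample with
# (time_steps, features) = (10, 10).
alphabet = list("abcdefghij")                 # assumed 10-symbol alphabet
char_to_idx = {c: i for i, c in enumerate(alphabet)}

text = "abcabcabca"                           # one made-up sample: 10 characters
sample = np.zeros((10, 10), dtype="float32")
for t, ch in enumerate(text):
    sample[t, char_to_idx[ch]] = 1.0          # one-hot along the feature axis

print(sample.shape)                           # (10, 10)
```

Each row is one time step (one character), and each column is one feature (one alphabet symbol), so exactly one entry per row is set to 1.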
This is a simple example of a single sample with the shape (10, 10): 10 characters along the time-series dimension and 10 features. It should help you understand the tutorial I mentioned a bit better.
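Tying this back to the original question, the RepeatVector suggestion might be sketched like this (the layer names, `n = 5`, and the dummy data are assumptions; the input is sized 64 to match the described arrays):

```python
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Input, RepeatVector, Conv1D

# Hypothetical sketch: RepeatVector supplies the missing time_steps
# dimension by repeating each 64-value input n times (n = 5 here).
model = Sequential()
model.add(Input(shape=(64,), name="input"))
model.add(RepeatVector(5, name="repeat"))   # (None, 64) -> (None, 5, 64)
model.add(Conv1D(64, 2, activation="relu", padding="same", name="convLayer"))

x = np.random.rand(10, 64).astype("float32")  # 10 dummy samples
print(model(x).shape)                          # (10, 5, 64)
```

With the `time_steps` dimension in place, `Conv1D` no longer raises the `min_ndim=3` error, and with `padding="same"` the time dimension is preserved through the convolution.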