For example, I would like to have a standard feed-forward neural network with the following structure:
- n input neurons
- n neurons on the second layer
- 2 neurons on the third layer
- n neurons on the fourth layer
where
- the i-th neuron in the first layer is connected only to the i-th neuron in the second layer (I don't know how to do this)
- the second and the third layer are fully connected, the same goes for the third and the fourth layer (I know how to do that - using nn.Linear)
- the loss function is MSE plus the L1 norm of the (vector of) weights between the first two layers (whether I can do this depends on the answer to the first point)
Motivation: I want to implement an autoencoder and try to achieve some sparsity; this is why each input is multiplied by a single weight on its way from the first layer to the second.
You can implement a custom layer, similar to nn.Linear, and then use it inside your model like any other module.
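For instance, a minimal sketch of such a layer and a model using it (the names `ElementwiseLinear` and `SparseAutoencoder` are my own, not from any library):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ElementwiseLinear(nn.Module):
    """Connects the i-th input only to the i-th output: y_i = w_i * x_i (+ b_i)."""
    def __init__(self, n_features, bias=True):
        super().__init__()
        # One weight per input neuron instead of a full weight matrix.
        self.weight = nn.Parameter(torch.ones(n_features))
        self.bias = nn.Parameter(torch.zeros(n_features)) if bias else None

    def forward(self, x):
        y = x * self.weight          # elementwise, not a matrix product
        if self.bias is not None:
            y = y + self.bias
        return y

class SparseAutoencoder(nn.Module):
    """n -> n (elementwise) -> 2 -> n, as described in the question."""
    def __init__(self, n, l1_coeff=1e-3):
        super().__init__()
        self.l1_coeff = l1_coeff
        self.elementwise = ElementwiseLinear(n)  # layer 1 -> layer 2
        self.encode = nn.Linear(n, 2)            # layer 2 -> layer 3 (fully connected)
        self.decode = nn.Linear(2, n)            # layer 3 -> layer 4 (fully connected)

    def forward(self, x):
        return self.decode(self.encode(self.elementwise(x)))

    def loss(self, output, target):
        # MSE + L1 penalty on the elementwise weights only.
        return F.mse_loss(output, target) + \
               self.l1_coeff * self.elementwise.weight.abs().sum()
```

In the training loop you would then call `model.loss(model(x), x)` and backpropagate through it as usual; since `self.weight` is an `nn.Parameter`, the optimizer picks it up automatically via `model.parameters()`.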
Of course, you can make things a lot simpler than that :)
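For example, instead of a separate module you can register the per-input weights directly as a parameter of the autoencoder and multiply in `forward` (again, a hedged sketch with made-up names):

```python
import torch
import torch.nn as nn

class SimpleSparseAE(nn.Module):
    def __init__(self, n):
        super().__init__()
        self.w = nn.Parameter(torch.ones(n))  # one weight per input, no extra module
        self.encode = nn.Linear(n, 2)
        self.decode = nn.Linear(2, n)

    def forward(self, x):
        return self.decode(self.encode(x * self.w))

    def l1_penalty(self):
        # L1 norm of the first-layer weights, to be added to the MSE loss.
        return self.w.abs().sum()
```

This behaves the same as the custom-layer version (minus the optional bias); which one you prefer is mostly a matter of taste and reuse.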