If I use encoder = PretrainedTransformerEmbedder(model_name, sub_module="encoder") as the encoder to pass to Bart(encoder=encoder), it reports an error because it doesn't implement get_input_dim(). If instead I build encoder = PretrainedTransformerEmbedder(model_name, sub_module="encoder") and then take encoder = encoder.encoder as the input encoder, as mentioned here, it reports an error because PretrainedTransformerEmbedder(model_name, sub_module="encoder") doesn't have an attribute named encoder.
So how can I use the full BART model (including token_embed and position_embed) for a seq2seq task in AllenNLP?
How to use Bart with PretrainedTransformerEmbedder?
If you just pass encoder=None (which is the default), the Bart model will use the native BART encoder. It sounds like that's what you want?
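For example, something like this minimal, untested sketch (assuming a recent allennlp / allennlp-models install, with "facebook/bart-base" standing in for your model_name and Vocabulary.from_pretrained_transformer used for brevity) keeps the full pretrained BART, token and position embeddings included:

```python
from allennlp.data import Vocabulary
from allennlp_models.generation.models.bart import Bart

model_name = "facebook/bart-base"  # assumption: any HF BART checkpoint you use

# from_pretrained_transformer builds the vocab from the HF tokenizer in recent
# AllenNLP releases; if your version lacks it, build the vocabulary from your
# dataset reader instead.
vocab = Vocabulary.from_pretrained_transformer(model_name)

# No encoder argument: with encoder=None (the default) the model keeps the
# native pretrained BART encoder rather than a user-supplied Seq2SeqEncoder.
model = Bart(model_name=model_name, vocab=vocab)
```

Passing your own Seq2SeqEncoder is only needed if you want to replace BART's encoder stack, which is the opposite of what you're after here.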