I am trying to run code that was written for older versions of torch and torchtext. I have adjusted a lot of the code to make it work, and I was able to pre-process and train my data. Finally I tried to run the test script, and after solving multiple errors I am now getting this one:
Batch size > 1 not implemented! Falling back to batch_size = 1 ...
Traceback (most recent call last):
  File "translate_mm.py", line 166, in <module>
    main()
  File "translate_mm.py", line 84, in main
    onmt.ModelConstructor.load_test_model(opt, dummy_opt.__dict__)
  File "/onmt/ModelConstructor.py", line 145, in load_test_model
    checkpoint['vocab'], data_type=opt.data_type)
  File "/onmt/io/IO.py", line 57, in load_fields_from_vocab
    fields = get_fields(data_type, n_src_features, n_tgt_features)
  File "/onmt/io/IO.py", line 43, in get_fields
    return TextDataset.get_fields(n_src_features, n_tgt_features)
  File "/onmt/io/TextDataset.py", line 218, in get_fields
    postprocessing=make_src, sequential=False)
TypeError: __init__() got an unexpected keyword argument 'tensor_type'
I have tried downgrading to older versions of PyTorch, but when doing this I get a ModuleNotFoundError, namely:
ModuleNotFoundError: No module named 'torchtext.legacy'
I have also tried running it in Anaconda with the pytorch and torchtext versions specified in the requirements, but there I get an entirely different error:
import torch._dl as _dl_flags
ImportError: No module named _dl
I just need to test the data at this point; everything else seems to have worked out. Any help would be greatly appreciated.
-U
Older versions of torchtext do not have a legacy module, so removing that part of the call should fix the error.
i.e.
torchtext.legacy.___ -> torchtext.___
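If you want the same file to run under either layout, an import guard works too. This is only a sketch, assuming the code imports torchtext.data somewhere; adjust it to whichever submodules your copy of OpenNMT actually uses:

# Sketch of a version-tolerant import: torchtext ~0.9-0.11 keeps the old
# Field/Iterator API under torchtext.legacy, while the older releases this
# codebase targets expose it directly as torchtext.data.
try:
    from torchtext.legacy import data  # torchtext ~0.9-0.11
except ImportError:
    from torchtext import data         # older torchtext releases

# Either way, the old API is then used the same way, e.g.:
TEXT = data.Field(sequential=True, lower=True)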