Train error I can't handle #3
Comments
In fact, when I torch.load the 'drqa_param/vocab.pt' file, the same exception is raised. Is the .pt file wrong? The test code is as follows:
Have you solved this problem? I have encountered the same problem and don't know how to solve it. Help~
@kouhonglady When I changed the torchtext version to 0.3.1, another error was reported, so I guess it may be a version problem. After that, I was too busy with other competitions to continue investigating. If you solve this problem, please tell me; otherwise I will share my method once I get this code running properly. I hope so~
@kouhonglady @dingjiajie Hi, have you fixed the errors?
Hello, @kouhonglady @dingjiajie. I have fixed the error in Python 3.7. Just install torchtext==0.4.0 and add the following code to the model_builder.py file:

    from torchtext import vocab
    try:
        vocab._default_unk_index
    except AttributeError:
        def _default_unk_index():
            return 0
        vocab._default_unk_index = _default_unk_index
This solves the OP's issue but triggered another runtime error for me: /opt/anaconda3/envs/py36/lib/python3.6/site-packages/torchtext/data/field.py:359: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
Just follow the suggestion: use ~mask instead of 1-mask.

For reference, the full error being discussed:

    /opt/anaconda3/envs/py36/lib/python3.6/site-packages/torchtext/data/field.py:359: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
      var = torch.tensor(arr, dtype=self.dtype, device=device)
    Traceback (most recent call last):
      File "train.py", line 109, in <module>
        main(opt)
      File "train.py", line 41, in main
        single_main(opt, -1)
      File "/Users/zhiliwang/Documents/nlp/ReDR/onmt/train_single.py", line 134, in main
        valid_steps=opt.valid_steps)
      File "/Users/zhiliwang/Documents/nlp/ReDR/onmt/trainer.py", line 217, in train
        report_stats, local_step)
      File "/Users/zhiliwang/Documents/nlp/ReDR/onmt/trainer.py", line 348, in _gradient_accumulation
        outputs, attns, results = self.model(batch, src, history, tgt, src_lengths, history_lengths, bptt=bptt)
      File "/opt/anaconda3/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
        result = self.forward(*input, **kwargs)
      File "/Users/zhiliwang/Documents/nlp/ReDR/onmt/models/model.py", line 70, in forward
        memory_lengths=src_lengths)
      File "/opt/anaconda3/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
        result = self.forward(*input, **kwargs)
      File "/Users/zhiliwang/Documents/nlp/ReDR/onmt/decoders/decoder.py", line 210, in forward
        tgt, memory_bank, memory_lengths=memory_lengths)
      File "/Users/zhiliwang/Documents/nlp/ReDR/onmt/decoders/decoder.py", line 388, in _run_forward_pass
        memory_lengths=memory_lengths)
      File "/opt/anaconda3/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
        result = self.forward(*input, **kwargs)
      File "/Users/zhiliwang/Documents/nlp/ReDR/onmt/modules/global_attention.py", line 183, in forward
        align.masked_fill(1 - mask, -float('inf'))
      File "/opt/anaconda3/envs/py36/lib/python3.6/site-packages/torch/tensor.py", line 394, in __rsub__
        return _C._VariableFunctions.rsub(self, other)
    RuntimeError: Subtraction, the - operator, with a bool tensor is not supported. If you are trying to invert a mask, use the ~ or logical_not() operator instead.
Use ~mask instead of 1-mask.
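Editor's note: the error message itself points at the fix, and the semantics can be sketched outside PyTorch. On a boolean array, `~` is elementwise logical NOT (what an inverted mask needs), whereas arithmetic like `1 - mask` on bool tensors is what newer PyTorch rejects. A minimal NumPy sketch (NumPy shares the `~` semantics; `np.where` here is a simplified stand-in for the `masked_fill` call in global_attention.py, and the values are made up for illustration):

```python
import numpy as np

# mask: True where attention scores correspond to real (non-padded) positions,
# a simplified stand-in for the length mask built by the attention module.
mask = np.array([True, True, False, True])
scores = np.array([0.5, 1.2, 0.7, 0.3])

# `~mask` is elementwise logical NOT on a boolean array -- the replacement
# the PyTorch error message recommends over `1 - mask`.
inverted = ~mask
assert inverted.tolist() == [False, False, True, False]
assert np.array_equal(inverted, np.logical_not(mask))

# Fill invalid positions with -inf before a softmax, mimicking
# align.masked_fill(~mask, -float('inf')).
masked_scores = np.where(inverted, -np.inf, scores)
assert masked_scores.tolist() == [0.5, 1.2, -np.inf, 0.3]
```

The same one-character change (`~mask` in place of `1 - mask`) in global_attention.py is the fix applied in the replies above.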
@AlbertChen1991 Thanks for your support. After fixing the 1 - mask issue, I got another error, shown below. Could you suggest a solution? Thank you very much!
I ran your code in Google Colab, but I hit a crucial error that I can't resolve by googling. Maybe you can help me, thanks.
P.S.: all the packages you mention are installed.