
What are the exact PyTorch and torchtext versions for your code? I am trying to downgrade to a previous version in order to avoid the Multi30k.split() problem but failed. #12

Open
yaoyiran opened this issue Oct 16, 2018 · 5 comments

Comments

@yaoyiran

What are the exact PyTorch and torchtext versions for your code? I am trying to downgrade to a previous version in order to avoid the Multi30k.split() problem but failed.

@keon (Owner) commented Oct 16, 2018

I can't remember, sorry.
I need to rewrite this in PyTorch v1.0.

@yaoyiran (Author)

Here is another bug I ran into with the code; do you have any idea how to fix it?

/workspace/seq2seq/model.py:47: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
  energy = F.softmax(self.attn(torch.cat([hidden, encoder_outputs], 2)))
Traceback (most recent call last):
  File "train.py", line 112, in <module>
    main()
  File "train.py", line 94, in main
    en_size, args.grad_clip, DE, EN)
  File "train.py", line 52, in train
    output = model(src, trg)
  File "/home/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/workspace/seq2seq/model.py", line 105, in forward
    output, hidden, encoder_output)
  File "/home/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/workspace/seq2seq/model.py", line 75, in forward
    attn_weights = self.attention(last_hidden[-1], encoder_outputs)
  File "/home/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/workspace/seq2seq/model.py", line 43, in forward
    return F.relu(attn_energies, dim=1).unsqueeze(1)
TypeError: relu() got an unexpected keyword argument 'dim'

@pskrunner14 (Contributor) commented Oct 16, 2018

@yaoyiran you might want to remove the dim keyword argument from relu and add dim=2 to the softmax call, then see if that resolves the issue. What version of PyTorch are you using?
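
For anyone hitting the same error, here is a minimal sketch of an attention module with both changes applied: an explicit dim passed to F.softmax, and the dim keyword dropped from F.relu. The layer names and tensor shapes are assumptions inferred from the warning and the traceback above, not the repo's exact code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Attention(nn.Module):
    """Minimal additive-attention sketch; hidden_size and layer names are placeholders."""

    def __init__(self, hidden_size):
        super().__init__()
        self.attn = nn.Linear(hidden_size * 2, hidden_size)
        self.v = nn.Parameter(torch.rand(hidden_size))

    def score(self, hidden, encoder_outputs):
        # pass an explicit dim so softmax no longer relies on the deprecated default
        energy = F.softmax(self.attn(torch.cat([hidden, encoder_outputs], 2)), dim=2)
        energy = energy.transpose(1, 2)                              # [B, H, T]
        v = self.v.repeat(encoder_outputs.size(0), 1).unsqueeze(1)   # [B, 1, H]
        return torch.bmm(v, energy).squeeze(1)                       # [B, T]

    def forward(self, hidden, encoder_outputs):
        # hidden: [B, H] (e.g. last_hidden[-1]); encoder_outputs: [T, B, H]
        timestep = encoder_outputs.size(0)
        h = hidden.repeat(timestep, 1, 1).transpose(0, 1)            # [B, T, H]
        encoder_outputs = encoder_outputs.transpose(0, 1)            # [B, T, H]
        attn_energies = self.score(h, encoder_outputs)
        # relu is element-wise and takes no dim argument, so dim=1 has to go
        return F.relu(attn_energies).unsqueeze(1)                    # [B, 1, T]
```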

@Linao1996 (Contributor)

Upgrading torchtext to 0.3.x solves this problem.
I was on 0.2.3 and ran into the same issue as you.
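
For reference, this is roughly what loading Multi30k looks like with the torchtext 0.3.x splits API; the field options below are illustrative rather than the repo's exact settings.

```python
from torchtext.data import Field
from torchtext.datasets import Multi30k

# Simple whitespace tokenization here; the repo may use spaCy tokenizers instead.
DE = Field(tokenize=str.split, init_token='<sos>', eos_token='<eos>', lower=True)
EN = Field(tokenize=str.split, init_token='<sos>', eos_token='<eos>', lower=True)

# In torchtext 0.3.x, Multi30k exposes a splits() classmethod that downloads
# the dataset and returns (train, val, test) translation datasets.
train, val, test = Multi30k.splits(exts=('.de', '.en'), fields=(DE, EN))

DE.build_vocab(train, min_freq=2)
EN.build_vocab(train, min_freq=2)
```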

@lorybaby commented Feb 4, 2019

@pskrunner14
Why not update your code on GitHub? I also used your attention model and got the same error.
I am using PyTorch 1.0.
