
ParallelTable with Sequencer does not seem to work #431

Open
vguptai opened this issue May 25, 2018 · 2 comments

vguptai commented May 25, 2018

Hi,

I am trying to use ParallelTable with a Sequencer, but I am not getting the desired result. I understand that I am not passing the input to forward in the correct way. How should I pass this data?

One way would be to stack the input data side by side before passing it to the model and then split it apart again inside the model, but I cannot find a module that does that. Basically, I would need to pass a tensor of size (50, 16, 1027) to the Sequencer and, inside the Sequencer, have a module that splits it back into the two original pieces.
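The join-then-split approach described above can be sketched roughly as follows (a sketch only, assuming the Element-Research rnn package for `nn.Sequencer`; `nn.JoinTable`, `nn.ConcatTable`, and `nn.Narrow` are standard nn modules, with `nn.Narrow` selecting a range of the per-step feature dimension):

```lua
require 'nn'
require 'rnn'  -- Element-Research rnn package, provides nn.Sequencer

local a = torch.randn(50, 16, 1024)
local b = torch.randn(50, 16, 3)

-- Join along the feature dimension: (50,16,1024) ++ (50,16,3) -> (50,16,1027)
local joined = nn.JoinTable(3):forward({a, b})

-- Per time step the Sequencer sees a 16x1027 tensor; nn.Narrow over
-- dimension 2 (the features) splits it back into 16x1024 and 16x3 pieces.
local step = nn.Sequential()
local split = nn.ConcatTable()
split:add(nn.Narrow(2, 1, 1024))   -- features 1..1024
split:add(nn.Narrow(2, 1025, 3))   -- features 1025..1027
step:add(split)

local pt = nn.ParallelTable()
pt:add(nn.Linear(1024, 1024))
pt:add(nn.Linear(3, 20))
step:add(pt)

local out = nn.Sequencer(step):forward(joined)
```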

Here is the code:

```lua
local a = torch.randn(50, 16, 1024)  -- seqLen x batch x 1024
local b = torch.randn(50, 16, 3)     -- seqLen x batch x 3
local m1 = nn.Sequential()
local parallel_table = nn.ParallelTable()
parallel_table:add(nn.Linear(1024, 1024))
parallel_table:add(nn.Linear(3, 20))
m1:add(parallel_table)
m1 = nn.Sequencer(m1)
m1:forward({a, b})  -- crashes with the error below
```

Following is the error that I get:

```
In 1 module of nn.Sequential:
In 2 module of nn.ParallelTable:
/usr/local/torch/install/share/lua/5.1/nn/Linear.lua:66: size mismatch, m1: [16 x 1024], m2: [3 x 20] at /usr/local/torch/pkg/torch/lib/TH/generic/THTensorMath.c:1293
```
@tastyminerals
Contributor

You are feeding tensors of incompatible sizes; ParallelTable does not do any resizing or splitting for you. Note that Sequencer treats the table `{a, b}` as a sequence of two time steps, so at step 1 the whole tensor `a` is handed to the ParallelTable, which indexes it: the second branch, `nn.Linear(3, 20)`, receives `a[2]` (a 16x1024 slice), hence the `[16 x 1024]` vs `[3 x 20]` size mismatch.


vguptai commented May 28, 2018

@tastyminerals Thanks for your response. Which tensor/dimension is the wrong size? What should I change to avoid the crash?
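One way to sidestep the crash entirely (a sketch, again assuming the Element-Research rnn package) is to invert the nesting so that each input sequence gets its own Sequencer. A `nn.Sequencer` applies its wrapped module to every step of a seqLen x batch x features tensor, so the two 3D tensors can be fed side by side through a ParallelTable of Sequencers instead of a Sequencer of a ParallelTable:

```lua
require 'nn'
require 'rnn'

local a = torch.randn(50, 16, 1024)  -- seqLen x batch x 1024
local b = torch.randn(50, 16, 3)     -- seqLen x batch x 3

-- Sequencer inside ParallelTable (rather than the other way around):
-- each branch applies its Linear to every one of the 50 time steps.
local model = nn.ParallelTable()
model:add(nn.Sequencer(nn.Linear(1024, 1024)))
model:add(nn.Sequencer(nn.Linear(3, 20)))

local out = model:forward({a, b})
-- out[1] should be 50x16x1024 and out[2] should be 50x16x20
```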
