Hi~,
I notice that the local feature (`lf`) variable is computed differently in the training and test phases. Specifically, in `AlignedReID/models/ResNet.py`:
```python
if not self.training:
    lf = self.horizon_pool(x)
if self.aligned and self.training:
    lf = self.bn(x)
    lf = self.relu(lf)
    lf = self.horizon_pool(lf)
    lf = self.conv1(lf)
if self.aligned or not self.training:
    lf = lf.view(lf.size()[0:3])
    lf = lf / torch.pow(lf, 2).sum(dim=1, keepdim=True).clamp(min=1e-12).sqrt()
```
During training, `lf` is produced by `bn → relu → horizon_pool → conv1`, but at test time `lf` is only `horizon_pool(x)`. Wouldn't this mismatch between the training phase and the testing phase make the results worse?
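For reference, here is a minimal sketch of what a consistent variant might look like, assuming the same `bn`, `relu`, `horizon_pool`, and `conv1` modules were reused at test time. This is only an illustration of the question, not the repository's actual code:

```python
# Hypothetical variant (illustration only, not the repo's code):
# run the identical op sequence in both phases so the local feature
# is computed the same way during training and testing.
# Assumes self.bn and self.conv1 exist (i.e. the aligned branch was built).
if self.aligned or not self.training:
    lf = self.bn(x)
    lf = self.relu(lf)
    lf = self.horizon_pool(lf)      # pool each horizontal stripe
    lf = self.conv1(lf)             # 1x1 conv, as in the training branch
    lf = lf.view(lf.size()[0:3])    # (N, C, H): one feature per stripe
    # L2-normalize each stripe's feature along the channel dimension
    lf = lf / torch.pow(lf, 2).sum(dim=1, keepdim=True).clamp(min=1e-12).sqrt()
```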