
jittor.nn.Embedding's documentation mentions padding_idx, but it is not implemented #589

Open
PhyllisJi opened this issue Sep 6, 2024 · 5 comments

@PhyllisJi

Describe the bug

jittor 1.3.7 does not implement padding_idx.

Full Log

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[5], line 61
     54     y = m(x)
     55     return list(y.shape)
---> 61 go()

Cell In[5], line 53, in go()
     51 jittor.flags.use_cuda = 1
     52 x = jittor.randn([1, 3, 224, 224])
---> 53 m = alexnet()
     54 y = m(x)
     55 return list(y.shape)

Cell In[5], line 27, in alexnet.__init__(self)
     25 self.pool3_mutated = jittor.nn.MaxPool2d(kernel_size=2, stride=4, padding=4, ceil_mode=True, return_indices=False)
     26 self.avgpool_mutated = jittor.nn.ReflectionPad2d(padding=1)
---> 27 self.flatten_mutated = jittor.nn.Embedding(embedding_dim=1, num_embeddings=5, padding_idx=8)

TypeError: __init__() got an unexpected keyword argument 'padding_idx'

Minimal Reproduce

import os
os.environ["disable_lock"] = "1"
import jittor
import jittor.nn as nn
import jittor.optim as optim
import numpy as np
import copy


class alexnet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1_mutated = jittor.nn.ConvTranspose2d(in_channels=3, kernel_size=11, out_channels=64)
        self.relu1_mutated = jittor.nn.Softmax()
        self.pool1_mutated = jittor.nn.ReplicationPad2d(padding=8)
        self.conv2_mutated = jittor.nn.PixelShuffle(upscale_factor=1)
        self.relu2_mutated = jittor.nn.PReLU()
        self.pool2_mutated = jittor.nn.MaxPool2d(kernel_size=3, stride=2, return_indices=False, ceil_mode=True)
        self.conv3_mutated = jittor.nn.Conv2d(in_channels=64, out_channels=384, kernel_size=3, padding=1, stride=8, groups=1, bias=False, dilation=(1, 1))
        self.relu3_mutated = jittor.nn.ReLU()
        self.conv4_mutated = jittor.nn.Sigmoid()
        self.relu4_mutated = jittor.nn.ReLU6()
        self.conv5_mutated = jittor.nn.AvgPool2d(kernel_size=(7, 1), stride=(2, 4))
        self.relu5_mutated = jittor.nn.ReLU()
        self.pool3_mutated = jittor.nn.MaxPool2d(kernel_size=2, stride=4, padding=4, ceil_mode=True, return_indices=False)
        self.avgpool_mutated = jittor.nn.ReflectionPad2d(padding=1)
        self.flatten_mutated = jittor.nn.Embedding(embedding_dim=1, num_embeddings=5, padding_idx=8)
    
    def execute(self, x):
        x = self.conv1_mutated(x)
        x = self.relu1_mutated(x)
        x = self.pool1_mutated(x)
        x = self.conv2_mutated(x)
        x = self.relu2_mutated(x)
        x = self.pool2_mutated(x)
        x = self.conv3_mutated(x)
        x = self.relu3_mutated(x)
        x = self.conv4_mutated(x)
        x = self.relu4_mutated(x)
        x = self.conv5_mutated(x)
        x = self.relu5_mutated(x)
        x = self.pool3_mutated(x)
        x = self.avgpool_mutated(x)
        x = self.flatten_mutated(x)
        return x




def go():
    jittor.flags.use_cuda = 1
    x = jittor.randn([1, 3, 224, 224])
    m = alexnet()
    y = m(x)
    return list(y.shape)

go()
@luozhiya

The padding_idx parameter of jittor.nn.Embedding requires jittor >= 1.3.8.0. The official documentation is currently based on 1.3.9.2, so please upgrade and try again.
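
For illustration only (not part of the thread): a minimal sketch of how padding_idx would be passed once the version requirement above is met. It assumes jittor >= 1.3.8.0 and reuses the keyword names from the reproduce script; note that padding_idx must be a valid row index, i.e. smaller than num_embeddings.

import jittor
import jittor.nn as nn

print(jittor.__version__)  # padding_idx requires >= 1.3.8.0 per the reply above

# padding_idx must point at a row inside the table (0 <= padding_idx < num_embeddings);
# the value 8 used in the reproduce script is out of range for num_embeddings=5.
emb = nn.Embedding(num_embeddings=5, embedding_dim=1, padding_idx=0)
out = emb(jittor.array([0, 1, 4]))  # row 0 serves as the padding row
print(out.shape)                    # [3, 1]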

@PhyllisJi

The padding_idx parameter of jittor.nn.Embedding requires jittor >= 1.3.8.0. The official documentation is currently based on 1.3.9.2, so please upgrade and try again.

For certain reasons I cannot upgrade right now. Where can I view the documentation for earlier versions?

@luozhiya

@PhyllisJi I could not find a version switcher on the official documentation site. If you need the docs for an older version, you can also read the docstrings in the corresponding API source code (the documentation is generated automatically from the code).
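
As a minimal sketch of that suggestion (standard Python introspection, nothing jittor-specific): it prints the docstring and source file of the version that is actually installed, which is what the hosted docs are generated from.

import inspect
import jittor.nn as nn

# Docstring bundled with the installed version
print(inspect.getdoc(nn.Embedding))

# Path to the source file of the installed version, for reading the full annotations
print(inspect.getsourcefile(nn.Embedding))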

@PhyllisJi

PhyllisJi commented Sep 11, 2024

@PhyllisJi I could not find a version switcher on the official documentation site. If you need the docs for an older version, you can also read the docstrings in the corresponding API source code (the documentation is generated automatically from the code).

One more question: why is jittor.nn.AdaptiveMaxPool3d missing from the documentation?
#484

@luozhiya

Most likely this is because AdaptiveMaxPool3d was not added to the documentation generation template.

#597

https://cg.cs.tsinghua.edu.cn/jittor/assets/docs/_modules/jittor/pool.html#AdaptiveMaxPool3d

class AdaptiveMaxPool3d(Module):
    '''
    Applies 3D adaptive max pooling over an input.

        Args:
            - output_size (int, tuple, list): the desired output shape.
            - return_indices (bool): if True, return the indices of the max values along with the output.

        Shape:
            - Input: :math:`[N, C, D, H, W]`
            - Output: :math:`[N, C, S_0, S_1, S_2]`, where (S_0, S_1, S_2) = ``output_size``.

        Attributes:
            - output_size (int, tuple, list): the desired output shape.
            - return_indices (bool): if True, return the indices of the max values along with the output.

        Example:
            >>> # target output size of 5x7x9
            >>> m = nn.AdaptiveMaxPool3d((5, 7, 9))
            >>> input = jt.randn(1, 64, 8, 9, 10)
            >>> output = m(input)
            >>> # target output size of 7x7x7 (cube)
            >>> m = nn.AdaptiveMaxPool3d(7)
            >>> input = jt.randn(1, 64, 10, 9, 8)
            >>> output = m(input)
            >>> # target output size of 7x9x8
    '''
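
For completeness, a small usage sketch based on the docstring above (not taken from the thread): it checks that the layer is available in the installed package even though it is missing from the generated documentation, assuming jittor.nn re-exports the pool classes as the docstring examples suggest.

import jittor as jt
import jittor.nn as nn

print(hasattr(nn, "AdaptiveMaxPool3d"))  # True if the installed version ships the layer

m = nn.AdaptiveMaxPool3d((5, 7, 9))      # target output size 5x7x9
x = jt.randn(1, 64, 8, 9, 10)            # input laid out as [N, C, D, H, W]
y = m(x)
print(y.shape)                           # [1, 64, 5, 7, 9]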
