
[xdoctest][task 337] reformat example code with google style in python/paddle/base/framework.py #57151

Merged: 6 commits into PaddlePaddle:develop, Sep 11, 2023

Conversation

ooooo-create (Contributor)

paddle-bot (bot) commented Sep 10, 2023

Your PR has been submitted. Thanks for your contribution!
Please wait for the result of CI firstly. See Paddle CI Manual for details.

@paddle-bot added the contributor (External developers) label on Sep 10, 2023
SigureMo (Member) left a comment

It looks like there are other failures in CI too; please take a look at those as well~

ooooo-create (Contributor, Author) commented:

[screenshot of a CI failure]
from paddle.base.dygraph import Linear: does this Linear actually exist?

SigureMo (Member) commented:

> It looks like there are other failures in CI too; please take a look at those as well~

Huh, the earlier review message got lost

> from paddle.base.dygraph import Linear: does this Linear actually exist?

paddle.nn.Linear
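
(For reference: a minimal sketch of the current API, with illustrative shapes:)

import paddle

# Linear now lives in paddle.nn rather than the old dygraph namespace
linear = paddle.nn.Linear(in_features=10, out_features=5)
x = paddle.randn([4, 10])
y = linear(x)  # y has shape [4, 5]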

ooooo-create (Contributor, Author) commented:

> Huh, the earlier review message got lost

Yeah, it really got lost~

Comment on lines 1004 to 1034
>>> import paddle
>>> paddle.enable_static()
>>> with paddle.static.name_scope("s1"):
... a = paddle.static.data(name='data', shape=[None, 1], dtype='int32')
... b = a + 1
>>> with paddle.static.name_scope("s2"):
... c = b * 1
>>> with paddle.static.name_scope("s3"):
... d = c / 1
>>> with paddle.static.name_scope("s1"):
... f = paddle.tensor.pow(d, 2.0)
>>> with paddle.static.name_scope("s4"):
... g = f - 1

>>> # Ops are created in the default main program.
>>> for op in paddle.static.default_main_program().block(0).ops:
... # elementwise_add is created in /s1/
... if op.type == 'elementwise_add':
... assert op.desc.attr("op_namescope") == '/s1/'
... # elementwise_mul is created in '/s1/s2'
... elif op.type == 'elementwise_mul':
... assert op.desc.attr("op_namescope") == '/s1/s2/'
... # elementwise_div is created in '/s1/s3'
... elif op.type == 'elementwise_div':
... assert op.desc.attr("op_namescope") == '/s1/s3/'
...     # elementwise_sub is created in '/s4'
... elif op.type == 'elementwise_sub':
... assert op.desc.attr("op_namescope") == '/s4/'
... # pow is created in /s1_1/
... elif op.type == 'pow':
... assert op.desc.attr("op_namescope") == '/s1_1/'
A Contributor left a comment

This code runs without problems, but in practice many of the elif branches are never reached ~

If you print each op (print('>>>', op, op.type)), you get this:

>>> {Out=['tmp_0']} = scale(inputs={ScaleTensor=[], X=['data']}, bias = 1.0, bias_after_scale = True, op_device = , op_namescope = /s1/, op_role = 0, op_role_var = [], scale = 1.0, with_quant_attr = False) scale
>>> {Out=['tmp_1']} = scale(inputs={ScaleTensor=[], X=['tmp_0']}, bias = 0.0, bias_after_scale = True, op_device = , op_namescope = /s2/, op_role = 0, op_role_var = [], scale = 1.0, with_quant_attr = False) scale
>>> {Out=['tmp_2']} = cast(inputs={X=['tmp_1']}, in_dtype = 2, op_device = , op_namescope = /s3/, op_role = 0, op_role_var = [], out_dtype = 5, use_mkldnn = False, with_quant_attr = False) cast
>>> {Out=['tmp_3']} = scale(inputs={ScaleTensor=[], X=['tmp_2']}, bias = 0.0, bias_after_scale = True, op_device = , op_namescope = /s3/, op_role = 0, op_role_var = [], scale = 1.0, with_quant_attr = False) scale
>>> {Out=['pow_0.tmp_0']} = pow(inputs={FactorTensor=[], X=['tmp_3']}, factor = 2.0, op_device = , op_namescope = /s1_1/, op_role = 0, op_role_var = [], with_quant_attr = False) pow
>>> {Out=['tmp_4']} = scale(inputs={ScaleTensor=[], X=['pow_0.tmp_0']}, bias = -1.0, bias_after_scale = True, op_device = , op_namescope = /s4/, op_role = 0, op_role_var = [], scale = 1.0, with_quant_attr = False) scale
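
(For reference, a dump like the one above can be reproduced by running the example under review and printing each op of the default main program; a condensed sketch:)

import paddle

paddle.enable_static()
with paddle.static.name_scope("s1"):
    a = paddle.static.data(name='data', shape=[None, 1], dtype='int32')
    b = a + 1  # scalar rhs, lowered to a scale op
with paddle.static.name_scope("s2"):
    c = b * 1  # scale op again
with paddle.static.name_scope("s3"):
    d = c / 1  # cast + scale ops
with paddle.static.name_scope("s1"):
    f = paddle.tensor.pow(d, 2.0)  # pow op, namescope /s1_1/
with paddle.static.name_scope("s4"):
    g = f - 1  # scale op with bias -1.0

# print every op recorded in block 0 of the default main program
for op in paddle.static.default_main_program().block(0).ops:
    print('>>>', op, op.type)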

@SigureMo has this flow in paddle changed? Is this legacy code?

A Member left a comment

First, the indentation level is too shallow in some places and needs fixing @ooooo-create

> @SigureMo has this flow in paddle changed? Is this legacy code?

if isinstance(other_var, float):
    # in all cases(+, -, *, /, **, //, %), we need cast tensor.dtype to float
    if self.dtype in _supported_int_dtype_:
        self = astype(self, 'float32')
    # here use `scale` replace `elementwise` to get better performance
    # but only +, -, *, / can use this method
    if scalar_method is not None:
        return scalar_method(self, other_var)
elif isinstance(other_var, int):
    # in all cases(+, -, *, /, **, //, %), we can cast it to float
    # because the output tensor.dtype depend on the type of input tensor
    other_var = float(other_var)
    # division is a special case
    # NOTE(chenweihang): because we cast tensor to float32 instead float64,
    # the division result can only guarantee the numerical accuracy of 6 digits
    # after the decimal point. The result of numpy calculation is of float64 type,
    # so the calculation result here and the calculation result of numpy are
    # different after 6 decimal point. If necessary, we can also use float64 here.
    # torch's behavior here is consistent with ours
    if (
        op_type == 'elementwise_div'
        and self.dtype in _supported_int_dtype_
    ):
        self = astype(self, 'float32')
    # here use `scale` replace `elementwise` to get better performance
    # but only +, -, *, / can use this method
    if scalar_method is not None:
        return scalar_method(self, other_var)
else:
    # do nothing
    pass

I looked at this part of the code: when the rhs is an int or float, the op becomes a scale op, though this code is already 3 years old

But this example code seems to have an even longer history, apparently from 4 years ago...

Since only a scalar rhs becomes a scale op, we can change it to b = a + paddle.to_tensor(1) to make sure an elementwise_add op is created, as sketched below
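
(A minimal sketch of the suggested fix, mirroring the change that was merged; per the discussion above, a tensor rhs should yield a genuine elementwise_add op, which the assert below assumes:)

import paddle

paddle.enable_static()
with paddle.static.name_scope("s1"):
    a = paddle.static.data(name='data', shape=[None, 1], dtype='int32')
    b = a + paddle.to_tensor(1)  # tensor rhs, so the scale-op shortcut does not apply

op_types = [op.type for op in paddle.static.default_main_program().block(0).ops]
assert 'elementwise_add' in op_types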

Comment on lines 1630 to 1631
... loss2 = paddle.sum(ret2)
... loss2.backward()
A Contributor left a comment

Add a retain_grads() ~ otherwise this print doesn't really mean anything ...

loss2 = paddle.sum(ret2)
loss2.retain_grads()
loss2.backward()
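
(For context, a small dynamic-graph sketch, with an illustrative tensor x, of why retain_grads() matters: the gradient of a non-leaf tensor is only kept if it is requested before backward():)

import paddle

x = paddle.to_tensor([1.0, 2.0, 3.0], stop_gradient=False)
ret2 = x * 2                 # non-leaf intermediate result
loss2 = paddle.sum(ret2)
loss2.retain_grads()         # without this, loss2.gradient() would be None
loss2.backward()
print(loss2.gradient())      # gradient of loss2 w.r.t. itself, i.e. 1.0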

Comment on lines 1678 to 1682
... loss2 = paddle.sum(ret2)
... loss2.backward()
... print(loss2.gradient())
... loss2.clear_gradient()
... print("After clear {}".format(loss2.gradient()))
A Contributor left a comment

Same as above, add retain_grads()

Also, the output of this print could be written out ~ for example:
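
(A sketch of what the written-out output could look like, assuming clear_gradient() zeroes the stored grad by default; the exact array formatting may differ:)

import paddle

x = paddle.to_tensor([1.0, 2.0, 3.0], stop_gradient=False)
loss2 = paddle.sum(x * 2)
loss2.retain_grads()
loss2.backward()
print(loss2.gradient())
# 1.0
loss2.clear_gradient()
print("After clear {}".format(loss2.gradient()))
# After clear 0.0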

@@ -1664,7 +1665,7 @@ def clear_gradient(self):
.. code-block:: python

>>> import paddle
->>> import paddle.base as base
+>>> import paddle.fluid as base
A Member left a comment

fluid?

@@ -1006,10 +1006,10 @@ def name_scope(prefix=None):
>>> with paddle.static.name_scope("s1"):
... a = paddle.static.data(name='data', shape=[None, 1], dtype='int32')
... b = a + 1
A Member left a comment

If a scalar is still used here, the branches below will basically never be taken

@@ -1005,7 +1005,7 @@ def name_scope(prefix=None):
 >>> paddle.enable_static()
 >>> with paddle.static.name_scope("s1"):
 ...     a = paddle.static.data(name='data', shape=[None, 1], dtype='int32')
-...     b = a + 1
+...     b = a + paddle.to_tensor(1)
 ...     with paddle.static.name_scope("s2"):
 ...         c = b * 1
A Member left a comment

The ones below also need to be changed

Comment on lines 7540 to 7543
>>> #print the number of blocks in the program, 1 in this case
>>> print(paddle.static.default_main_program().num_blocks) # 1
>>> #print the default_main_program
>>> print(paddle.static.default_main_program())
A Member left a comment

Suggested change
->>> #print the number of blocks in the program, 1 in this case
->>> print(paddle.static.default_main_program().num_blocks) # 1
->>> #print the default_main_program
->>> print(paddle.static.default_main_program())
+>>> # print the number of blocks in the program, 1 in this case
+>>> print(paddle.static.default_main_program().num_blocks) # 1
+>>> # print the default_main_program
+>>> print(paddle.static.default_main_program())

 ...     with paddle.static.name_scope("s3"):
-...         d = c / 1
+...         d = c / paddle.to_tensor(1)
 >>> with paddle.static.name_scope("s1"):
 ...     f = paddle.tensor.pow(d, 2.0)
A Member left a comment

This one too 😂

ooooo-create (Contributor, Author) replied:

😊 Yes, the new ones are all fixed~

SigureMo (Member) left a comment

LGTMeow 🐾

@luotao1 added the HappyOpenSource Pro (advanced happy open source program with more challenging tasks) label on Sep 11, 2023
@luotao1 closed this on Sep 11, 2023
@luotao1 reopened this on Sep 11, 2023
@luotao1 merged commit 8d84b72 into PaddlePaddle:develop on Sep 11, 2023
@ooooo-create deleted the ooooo/xdoctest337 branch on September 23, 2023