
Update #2

Open

wants to merge 2,685 commits into base: master
Conversation

Deliadd (Owner) commented Nov 12, 2018

No description provided.

wzhongyuan and others added 30 commits December 25, 2018 16:32
* add static graph to IR graph

* meet pr comments
…n issue by docker (#2713)

* make the core number fixed

* fix local predictor
…ltiple times (#2648)

* reserve optimMethod for each worker

* add validation throughput

* cache variable previousOptim
We do not use the parent thread directly because there are two bugs:
1. Child threads forked from the parent thread are bound to core 0 because of the affinity settings.
2. The native thread holds some unknown thread-local variables, so if the parent thread exits and is recreated (for example, a thread from Executors.newFixedThreadPool), the whole app segfaults.
Here the parent thread means the main thread (local mode) or the worker thread of mapPartition (distributed mode); a sketch of the dedicated-thread alternative is shown below.
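The fix this describes is to run all native MKL-DNN work on one dedicated, long-lived thread rather than on the parent thread itself. Below is a minimal Scala sketch of that pattern, with hypothetical names (`DnnExecutionContext`, `invokeAndWait`) that are not BigDL's actual Engine code.

```scala
import java.util.concurrent.{Callable, Executors, ThreadFactory}

// Minimal sketch (hypothetical names, not BigDL's actual Engine code):
// one long-lived thread owns every native MKL-DNN call, so native
// thread-local state stays valid and core-affinity settings never leak
// to the parent/worker thread that submitted the work.
object DnnExecutionContext {
  private val factory = new ThreadFactory {
    override def newThread(r: Runnable): Thread = {
      val t = new Thread(r, "dnn-compute-thread") // illustrative thread name
      t.setDaemon(true)                           // do not block JVM shutdown
      t
    }
  }

  // A single-threaded executor reuses the same thread for every task,
  // unlike a pool whose threads may exit and be recreated (the
  // Executors.newFixedThreadPool case mentioned above).
  private val executor = Executors.newSingleThreadExecutor(factory)

  def invokeAndWait[T](task: () => T): T =
    executor.submit(new Callable[T] { override def call(): T = task() }).get()
}
```

Callers would then wrap native work in this context, e.g. `DnnExecutionContext.invokeAndWait(() => model.forward(input))`.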
* add ceilMode for Pooling & fix batchNorm evaluate

* add training status for dnn layer

* fix comments
* fix IRGraph init & add regularizer

* meet review comments
There are two issues:

1. The required padding tensor: mkl-dnn uses a padding tensor that takes more memory, e.g. 4x1x28x28 is padded to 4x8x28x28 on AVX2, because the channel dimension is padded up to a multiple of the SIMD width (see the sketch below).
2. The TensorMMap between DenseTensor and DnnTensor: the previous implementation allocated the DnnTensor when the model was created, which costs too much space, so this patch allocates it at runtime instead.
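To make point 1 concrete, here is a small Scala sketch of the channel-padding rule (a hypothetical helper, not BigDL's actual code): the channel dimension is rounded up to a multiple of the SIMD width, which is why a 4x1x28x28 activation grows to 4x8x28x28 on AVX2 (width 8).

```scala
// Illustrative sketch of the mkl-dnn channel-padding rule described above
// (hypothetical helper, not BigDL's actual code).
object PaddedShape {
  private def roundUp(dim: Int, multiple: Int): Int =
    ((dim + multiple - 1) / multiple) * multiple

  /** Pad the channel dimension of an NCHW shape to a multiple of the SIMD width. */
  def nchw(shape: Array[Int], simdWidth: Int = 8): Array[Int] = {
    require(shape.length == 4, "expecting an NCHW shape")
    Array(shape(0), roundUp(shape(1), simdWidth), shape(2), shape(3))
  }
}

// PaddedShape.nchw(Array(4, 1, 28, 28))  // => Array(4, 8, 28, 28) on AVX2
```

Point 2 then amounts to deferring the DnnTensor allocation until these padded shapes are actually needed at runtime, rather than allocating them when the model is created.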
…2740)

* add computeShape for some layers and add skip primitives in DnnGraph

* meet pr comments
* Modify documentation

* Modify documentation 2

* Updated the environment configuration documentation
* add auto fusion in dnn graph
* refactor predict for dnn model
* Modify documentation

* Modify documentation 2

* Updated the environment configuration documentation

* Corrected some mistakes in the API Guide

* Update learning rate scheduler doc.

* Fix the Bottle Container example code.
* feat: add byte supports for DnnTensor
* [New Feature]Calculating Scales
* recursively update mask for container module
* add parallel in Blaswrapper

* refactor to support ssd

* meet pr comments

* fix logger serialize
* Improve Loss Function docs v2
* change asInstanceOf to toDistirbuted

* change asInstanceOf to toDistirbuted
* convert scale in blas to dnn

* meet pr comment
1. Because of the new data type, we add a new attribute called `dataType` to the `MemoryData`.
2. Because the scales must be transferred between FP32->int8 and int8->FP32 conversions, we add two new attributes called `mask` and `scales` (a simplified sketch of these attributes is shown below).
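A simplified, hypothetical sketch of those attributes follows; it is not the real `MemoryData` class in BigDL's mkldnn package, which differs in detail.

```scala
// Hypothetical, simplified model of the attributes described above.
sealed trait DnnDataType
case object Float32 extends DnnDataType
case object Int8    extends DnnDataType

final case class MemoryDataSketch(
  shape: Array[Int],
  dataType: DnnDataType = Float32,   // new: precision of this memory region
  mask: Int = 0,                     // new: 0 = per-tensor scale; set bits mark per-dimension scales
  scales: Array[Float] = Array.empty // new: FP32 <-> int8 quantization scales
)

// e.g. a per-tensor quantized activation whose values fit in [-maxAbs, maxAbs]:
// MemoryDataSketch(Array(4, 8, 28, 28), Int8, mask = 0, scales = Array(127.0f / maxAbs))
```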
*  fix accuracy for saved model

* exclude mkldnn model when conversion
Enable the int8 data type in layers, especially for convolutions. A specific layer can then accept an int8 input; if you want fp32 output, a reorder should be added (see the sketch below).
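As an illustration of that last point, the int8-to-fp32 reorder is essentially a dequantization by the stored scale. This is a hedged sketch with hypothetical names; a real mkl-dnn reorder primitive also converts memory layouts, which is omitted here.

```scala
// Hypothetical sketch: dequantize a per-tensor quantized int8 buffer back to
// fp32, assuming the scale was computed as 127 / maxAbs at quantization time.
final class ReorderToFloatSketch(scale: Float) {
  def forward(int8Output: Array[Byte]): Array[Float] =
    int8Output.map(_.toFloat / scale)
}

// usage: val fp32Out = new ReorderToFloatSketch(outputScale).forward(int8ConvOut)
```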
jenniew and others added 27 commits January 20, 2021 17:43
* remove DLFrames

* update

* update

* update

* rm dlframe example from test script
* Add Utest about dividing zero

* add Utest and zero check of LocalData

* add Utest and zero check of LocalData

* change

* Add Utest about dividing zero

* fix test
* add python3 to Dockerfile

* update

* update jdk

* update
* make default DistriOptimizer as V2

* update
* DistriOptimizerV2 logger

* update

* fix style check

* validate epoch num
* flip0.14

* update
liu-shaojun deleted the Deliadd:master branch March 6, 2024 09:23