Cyphers/27catchup (#3978)
* Fix broadcast v1 reference (#3880)

* Added reproducer for issue with broadcast v1

* Make reference broadcast work with V1 broadcast

* Deprecate runtime::Tensor::copy_from

* Force Gelu decompose on CPU (#3887)

* Round the right bit with denorms (#3885)

* Round the right bit with denorms

* Rounding to inf

* Attribute visitor (#3579)

* Sketch of attribute walker

* Review comments

* merge error?

* Remove unused method

* simplify, make some ser tests work

* Don't look for keys that aren't there

* Factory registry, more ops visited, generic ser/dser start

* More merge

* cleanup

* Adapter for enums

* Compiler error

* Test of user-defined op

* Simplify enum name pairing

* Update distributed.hpp

* Review comments

* compiler error

* Direct access to non-primitive types from adapters

* Define and export type info

* attr enums, AvgPool*, vectors

* Cleanup

* some comments

* Allow type info to be used as a key.

* Don't leave output serialization shapes set.

* Auto adapter

* More ops, adapters

* Missing symbol

* Remove PartialShape and element::Type methods from visitor

* Fix type info

* Remove unused variable

* Simplify

* namespace error

* exports

* Uniform names

* Some better names

* More name cleanup, simplify visitor implementation

* Fix template, add test

* Revert serializer

* Add instantiations

* Work-around gcc issue

* VS exports

* VS exports

* windows export

* vs

* vs

* vs

* vs

* Simplify

* vs

* vs

* Add some missing attributes

* Missing factories

* Merge error

* Fix Add factories

* Missed type
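
A minimal sketch of how a user-defined op might plug into the new attribute visitor (illustrative names, not code from this commit; assumes `using namespace ngraph;`):

```
class MyOp : public op::Op
{
public:
    // Type info doubles as the key for the factory registry and serialization
    static constexpr NodeTypeInfo type_info{"MyOp", 0};
    const NodeTypeInfo& get_type_info() const override { return type_info; }

    // Generic passes (ser/deser among them) walk attributes through here
    bool visit_attributes(AttributeVisitor& visitor) override
    {
        visitor.on_attribute("axis", m_axis);
        return true;
    }

private:
    int64_t m_axis{0};
};
```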

* [FUSED] Add new LogSoftmax fused op (#3867)

* LogSoftmax introduced

* Added LogSoftmax to serializer

* Fixed style

* Fixed CmakeLists style

* code review remarks introduced

* Code review remarks introduced

* [ONNX] Importer should use fused op for MatMul (#3842)

* [ONNX] Importer should use fused op for MatMul

* Fix a bug in fused matmul op

* Don't reshape matmul inputs to at least 2D any more

* [SPEC] Add auto_broadcast parameter to SquaredDifference (#3856)

* [SPEC] Add auto_broadcast parameter to SquaredDifference

* Rename set_autobroadcast->set_autob

* [Spec][FusedOp]Adjust SpaceToDepth fused op to specification (#3862)

* Added support mode for SpaceToDepth

* Added unit tests

* Fixed styles

* Revert changes in prototxt files

* Force AutoBroadcast defaults (#3878)

* Force AutoBroadcast to be specified at the op level since no default is correct for all ops.

* exports

* Added constant folding for binary ops (#3895)

* Modify Gather constant folding to support v1 op.

* Address PR feedback.

* Update fused ops groupconvolution, gelu and layernorm to be dynamic friendly (#3876)

* set output et

* set output et

* overwrote validate and infer

* Add full path to gtest for build via ninja (#3882)

* [FUSED] Add reciprocal op (#3851)

* [FUSED] Add reciprocal op

* Review Fix #1

* Move operator op::v1 -> op

* Fix serializer

* Review Fix I

* [SPEC] Add new v1::FloorMod operator (#3852)

* [SPEC] Add new v1::FloorMod operator

* Review Fix I
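
A minimal usage sketch (my illustration, not code from this commit; namespaces elided), assuming the usual binary elementwise constructor with NumPy autobroadcast as the default:

```
auto a = make_shared<op::Parameter>(element::f32, Shape{4});
auto b = make_shared<op::Parameter>(element::f32, Shape{4});
auto floor_mod = make_shared<op::v1::FloorMod>(a, b); // autob defaults to NUMPY
```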

* [MLIR] Fix MLIR build on mac OS (#3896)

* Fix MLIR build on mac OS

* Style

* Style

* [MLIR] Bump MLIR commit to c61db4bb (#3879)

* WIP

* WIP

* WIP

* WIP

* style

* WIP

* WIP

* Add err msg

* Fix headers and cleanup

* Bug Fix: incorrect shape validation logic. (#3897)

* Allow for overriding functions in visualization (#3900)

* Add ReplaceSlice to ZeroDimTensorElimination pass (#3899) (#3910)

* Add ReplaceSlice to ZeroDimTensorElimination pass

* style

* Default constructor needs to init autob (#3913)

* Implementation of CrossEntropy and CrossEntropyBackprop as fused ops (#3818)

* - Implementation of CrossEntropy and CrossEntropyBackprop as fused ops

* - unit test case for CE fprop
- fix bug in decompose_op

* WIP debug PDPD unit test failure

* fixed broadcasting issue

* - fix broadcast issue for multi-dim tensor

* utilities to restore the original tensor shape

* i) style-fix ii) rename variables

* - unit test for multiple dimensions ii) refactor create_mask to separate function

* - fixed unit tests

* fix style

* set output element type to dynamic in pre_validate and infer shape

* disable ce with one hot unit test on PlaidML

* add CE op to fused_op_tbl

* - add serializer support for CE and CE Backprop

* Update ToC to better match docplan spreadsheet (#3846)

* New ToC

* Working on docplan

* Clean up for toc

* Link to existing APIs on quantization doc

* Better align topics with docplan ToC; add section for dyn shapes

* Title casing to be consistent

* PR reviews

* New build preview

* Add default opset version, new versioning schema

* Remove duplicate file causing doc build warning

* Fix CSS rendering issues (#3921)

* Fix for the bug with as_type_ptr for TensorIterator::Input/Output desc (#3906)

* Updated unit test to reproduce a bug

* Code style

* Add exports

* Added missed export

* Bug fix in conv v1 shape inference (#3912)

* [SPEC] Add new v1::VariadicSplit operator (#3868)

* [SPEC] Add new v1::VariadicSplit operator

* Add missing namespace, fix a typo in doc

* Apply suggestions from code review

Co-Authored-By: Michał Karzyński <[email protected]>

* Style fix

* Set all of the inputs to be relevant to output shape

* Set output type if number of outputs is known

* Add node validation for known input
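
A usage sketch (illustrative, namespaces elided; axis and split lengths are passed as nodes per the spec):

```
auto data = make_shared<op::Parameter>(element::f32, Shape{2, 6});
auto axis = op::Constant::create(element::i64, Shape{}, {1});
auto lengths = op::Constant::create(element::i64, Shape{3}, {2, 1, 3});
// Produces three outputs: {2,2}, {2,1}, {2,3}
auto vsplit = make_shared<op::v1::VariadicSplit>(data, axis, lengths);
```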

* Fix for windows ninja (#3917)

* Fix for windows ninja

* Fix for centos build

* Remove fix for centos

* Update ONNX importer to use v1 version of Softmax (#3894)

* Added downgrade pass for Softmax.

* Updated Softmax op to v1.

* Created vector with the right capacity.

* Include numeric header to enable std::iota function

* Removed unused numeric header from the old file

* Fix includes style

* Fix shape inference of TensorIterator body (#3922)

* fix for shape inference of tensor iterator body

* updated unit test for case end = -2

* indexes in unit tests

* Updated formula for num_iterations

* resolve compiler warning (#3923)

* Added u1 precision for binary weights (#3914)

* Added U1 precision for binary weights

* Handle switch cases with u1 type

* Fixed code style

* Added convert_to_string support for u1 type

* Use real C type  for u1 type.

Co-Authored-By: Robert Kimball <[email protected]>

* Fused_op: BatchMatMulTranspose (#3871)

* Initial commit

* Add decompose_op and unit-test

* Style fix

* Fix CI error

* Address review comments

* Remove CPUBatchFusion

* Address review feedback

* Address review feedback

* Added type_prop tests

* Moved 1 test from cpu to core to keep together

* Address PR comments

* Fix style

* Change repositories addresses to use SSH (#3889)

* Move CPU only unit tests to the cpu test file (#3919)

* Cyphers/uop (#3903)

* Address op_tbl issues

* fix

* fix

* fix

* Cleanup

* cleanup

* cleanup

* More fixes

* Revert ser changes

* Compiles

* opset conversion fixed

* Fix opset conversion tests

* Deal with Reciprocal and FloorMod movement

* Cleanup

* Remove duplicate enums

* Experiment

* experiment

* Types

* Reorg around clang 3.9 bug

* Add default constructor to some ops missing them (#3924)

* [SPEC] HardSigmoid adjustments (#3857)

* Construct HardSigmoid with alpha and beta as inputs

* Switch to the new HardSigmoid constructor entirely

* Broadcast with numpy style in hard sigmoid

* Python bindings adjustment to the new constructor

* Different way of creating constants

* Accept scalars instead of 1D vectors for alpha and beta

* Adjust the python tests to the new HardSigmoid constructor

* Use v1 ops in fused HardSigmoid

* Relax the static shape requirement for alpha and beta

* Fix merge
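
A sketch of the adjusted constructor (illustrative, namespaces elided; alpha and beta are now scalar inputs rather than attributes):

```
auto data = make_shared<op::Parameter>(element::f32, Shape{2, 3});
auto alpha = op::Constant::create(element::f32, Shape{}, {0.2f});
auto beta = op::Constant::create(element::f32, Shape{}, {0.5f});
auto hard_sigmoid = make_shared<op::HardSigmoid>(data, alpha, beta);
```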

* CropAndResize op (#3893) (#3925)

* Stub for CropAndResize

* Cut and pasteo

* Need a cast

* Put all the op header includes in one header file, ops.hpp (#3929)

* Put all the op header includes in one header file, ops.hpp

* Update ops.hpp

* Fix compilation issues for default constructors (#3928)

* Make Node's type_info mandatory (#3891)

* Make Node's type_info mandatory

* Add ReplaceSlice to ZeroDimTensorElimination pass (#3899)

* Add ReplaceSlice to ZeroDimTensorElimination pass

* style

* Force Gelu decompose on CPU (#3902)

* Copy rt info (#3934)

* Matmul float type test case for UEP (#3877)

* Matmul float type test case for UEP

Signed-off-by: suryasidd <[email protected]>

* Removed microsoft ops domains and ran clang-format

Signed-off-by: suryasidd <[email protected]>

* [SPEC] Add OneHot:v1 (#3884)

* Moved OneHot to v0

* Introduced OneHot:v1

* Added shape calculation for OneHot:v1

* Added element types checking

* Added output shape tests

* Added tests to checking if inputs are scalars

* Updated OneHot:v1 doc

* Implemented OneHot:v1 downgrade pass

* Using OneHot:v1 in onnx_importer

* Implemented OneHot:v0 upgrade

* Fixed OneHot onnx_importer

* Refactored normalize_axis

* Added OneHot:v1 serialized

* Code review remarks introduced

* Added doc to normalize_axis
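
A usage sketch for the v1 op (illustrative, namespaces elided; depth, on_value and off_value are inputs, axis an attribute):

```
auto indices = make_shared<op::Parameter>(element::i64, Shape{4});
auto depth = op::Constant::create(element::i64, Shape{}, {3});
auto on_value = op::Constant::create(element::f32, Shape{}, {1.0f});
auto off_value = op::Constant::create(element::f32, Shape{}, {0.0f});
auto one_hot = make_shared<op::v1::OneHot>(indices, depth, on_value, off_value, /*axis*/ 1);
```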

* Enable pipelining in CPU Backend (#3916)

* Enable pipelining in CPU Backend

* Applying clang-formatting to my previous commit

* Changing CPU backend test. executable_can_create_tensor will now return true

* [SPEC] Add support string as AutoBroadcastSpec (#3909)

* Support string casting to AutoBroadcastSpec

* Make string values consistent
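
A sketch of what the string form enables (my assumption about the accepted names, e.g. "NUMPY"/"NONE"; not code from this commit):

```
auto autob = op::AutoBroadcastSpec("NUMPY");
auto sum = make_shared<op::v1::Add>(a, b, autob);
```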

* Adding default ctor for Constant (#3938)

* Adding default ctor

* Address PR feedback

* Cumulative Sum (#3873)

* - Op definition for cumulative sum

* WIP reference kernel for cumulative sum

* - unit test case for default cum_sum
- additional ctor for CumSum to accept axis as an integer instead of Node type
- style fix

* - add serializer support
- fix failing unit test case
- update Op in the interpreter dispatcher

* - CPU builder and DEX support for CumSum

* - implemented mapping tensor elements to corresponding axis

* - unit test for multiple dims
- fix axis in the op definition
- support for reference kernel to compute across all axes

* - added support for exclusive and reverse modes
- more unit test cases for all modes

* - codegen support for CumSum
- disable CumSum unit test for PlaidML

* - Add missing header to codegen stream writer

* fixed codegen writer

* change return type of exclusive and reverse to bool

* - support for dynamic shape
- support to handle all tensor types in CPU builder

* - add support for interpreter to handle different axis types

* Style fix
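
A usage sketch (illustrative, namespaces elided; axis is a node, exclusive/reverse are bools as noted above):

```
auto data = make_shared<op::Parameter>(element::f32, Shape{2, 3});
auto axis = op::Constant::create(element::i32, Shape{}, {1});
auto cum_sum = make_shared<op::CumSum>(data, axis, /*exclusive*/ false, /*reverse*/ false);
```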

* Fix incorrect uses of `description()` (#3946)

* Fix incorrect uses of `description()`

* type-o/namespace

* Move non-primitive attribute adapters to adaptee's files (#3949)

* Move non-primitive attribute adapters to adaptee's files

* Cast in copy

* Update ONNX importer Gemm to produce MatMul op (#3927)

* Update ONNX importer Gemm to produce MatMul op

* Address opset3 bug

* [SPEC][FusedOp] Add Mod operator (#3908)

* Mod operator introduced

* Introduced onnx importer, fixed implementation

* styles applied

* Refactored assert comment for mod

* Add failure mod test to plaidml manifest

* Code review remarks introduced

* Changed ops used in decompose to v1

* Moved Mod to op_v1_tbl

* Partially fixed visibility for symbols (Ops, Nodes, Transformations, Matchers) (#3767)

* Partially fixed visibility for symbols:

* Resolved issues with RTTI and AppleClang

* style

* review fixes

* fixed compilation with msvc 2019

* Export extra API which is used in other public classes

* CMAKE: MSVS -> MSVC

* Fixed template export

* Fixed compilation flags

* Fixed default args

* removed self-inclusion

* export

* shape

* export strides

* Export all symbols needed for OpenVINO

* Export

* disable cpu

* AxisSet

* disable warning

* fix

* removed second declaration

* fixed runtime exports

* Reverted some changes

* Fixed LNK2005 error on Windows

* Fixed code style check

* Fixed EnumAttributeAdapterBase

* Remove export of template classes

* Fixed code style for EnumAttributeAdapterBase

* Fixed for protobuf

* Test cleanups (#3942)

* Documentation for Dynamic Shapes and additional graph construction options (#3930)

* Initial dynamic shapes doc

* Basics on dynamic shapes, with example code

* Add glossary defs and dynamic shapes example

* Slightly better organization

* Address make style check failure, maybe

* Test dynamic shapes doc w 0.27.0-rc.0+9aa81d9

* Resolve doc build error w new opset versioning

* Review comments addressed

* Add theme-relevant revised illustrations from collab_ngai

* style

* Style fixes

* Run make style-apply with clang-format-3.9

* [ONNX] Add CumSum to ONNX importer (#3918)

* Register CumSum operator in onnx importer

* Missing whitespace

* Update CMakeLists.txt

* ONNX importer - CumSum op init

* Simple CumSum onnx model

* ONNX CumSum model simple test

* Default axis

* Axis input test

* Inputs variable

* Style apply

* Test 3d exclusive reverse

* Apply style

* Add memory header and std namespace

* Add model_cum_sum tests to plsidml unit_test.manifest

* Add model_cum_sum tests to plaidml unit_test.manifest

* Changed default axis type

* Test model update

* Style apply

* Add test for dynamic axis input

* [MLIR] Fused Ops dialect declaration (#3860)

* WIP

* WIP

* WIP

* All ops

* Fix layernorm backprop op name

* WIP: Adding tests

* WIP: Adding LIT parsing/printing tests

* WIP

* Added LSTM cells. Fixed some ops

* All builder tests

* PR fixes

* Fix spacing. Add missing setter to SpaceToDepth

* Update spaceToDepth lit test

* PR fixes

* Build fix

* Another fix

* Fixed optional args

* [MLIR] Enable ViewOp in Affine Lowerer (#3911)

* Map each ng tensor to a linear buffer and a view

* fix comment

* Create views only when a value is assigned a buffer id

* style

* Fix lit test

* ConstantFolding for v1::StridedSlice operation (#3955)

* constant folding for strided slice

* code style

* Refactoring

* fix for warning: deleting an unused variable

* Opset1 Definition (#3813)

* Opset1

* Added opset1.hpp

* Added more ops to opset0 and opset1

* Move opset1.hpp up and remove opset0.hpp

* Add versioning to more ops

* Revert to older pass names to keep compatibility for external components

* Fix compilation errors with codegen

* merge

* Added compile-time check for opset

* Added opset1 tbl

* Add op_version table of all ops

* Create factories from op_version_tbl

* reorg unsupported ops in int backend

* Added temporary alias for GreaterEqual

* Add missing case to interpreter enumeration

* Finish opset serializer cleanup (#3939)

* Opset-based opset conversion (#3937)

* Opset-based opset conversion

* Add other opset conversion

* Use ops.hpp

* Update opset0_tbl.hpp

* Switch interpreter to opset0 + a few extras (#3941)

* Switch interpreter, gcpu to opset0

* Remove unused files

* Give interpreter its own opset

* style

* Fix namespace

* Fix rounding type conversion

* Work-around for bad clang3.9 bug

* Work-around

* [SPEC] Add negative axes support for ReverseSequence (#3926)

* Added negative axes support for ReverseSequence

* code review remarks introduced

* Disable reverse sequence for PlaidML tests

* Fixed styles

* Fixed axes assignment

* Fixed normalized axes assignment

* [SPEC] Adjust ConvolutionBackpropData op. (#3935)

* [SPEC] Adjust ConvolutionBackpropData op.

```
inputs:
  1. filters-------+
  2. output_delta  |  -> 1. data
                   +---> 2. filters
  3. data_batch_shape -> 3. output_shape(+optional)

attributes:
  1. strides          -> 1. strides
  2. dilations-----+
  3. pads_begin    |  -> 2. pads_begin
  4. pads_end      |  -> 3. pads_end
                   +---> 4. dilations
                     -> 5. +auto_pad(optional)[PadType::EXPLICIT]
                     -> 6. +output_padding(optional)[zeros]
```

* Review fix I
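
A construction sketch matching the remapping above (illustrative, namespaces elided; the optional output_shape input and the auto_pad/output_padding attributes are left at their defaults):

```
auto data = make_shared<op::Parameter>(element::f32, Shape{1, 16, 10, 10});
auto filters = make_shared<op::Parameter>(element::f32, Shape{16, 2, 3, 3});
auto conv_bprop = make_shared<op::v1::ConvolutionBackpropData>(
    data, filters,
    Strides{1, 1},        // strides
    CoordinateDiff{1, 1}, // pads_begin
    CoordinateDiff{1, 1}, // pads_end
    Strides{1, 1});       // dilations
```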

* [SPEC] ConvertLike op (#3944)

* [Spec] Add 3-input constructor to DetectionOutput (#3966)

* Add 3-input constructor to DetectionOutput

* Review comments

* v1::Reshape zero_flag renamed. Default value unset (#3945)

* Add groupconvolution bprop (#3940)

* add placeholder for conv bprop

* add constructor, api, serializer and can compile

* implement decompose_op

* fix arg num

* fix and update

* address comment, clean up and add ut placeholder

* update ut

* address comment on groups

* Added explicit dependencies between buildable target and external project (#3962)

*  Relax check on LRN for rank requirement to be >=3 (#3952)

*  relax check for LRN for requirement rank should be >=3

* rename unit test names

* - Disable lrn unit test with axes for CPU backend

* remove outdated unit test on rank requirement from type_prop

* - disable newly added lrn unit test in plaidMl

* [SPEC] ReduceLogicalAnd & ReduceLogicalOr (#3874)

* ReduceLogicalAnd op implementation

* ReduceLogicalOr op implementation

* Add basic constant folding support

* Fix typo

* Revert "Add basic constant folding support"

This reverts commit 5d14a18.

* Introduce and use a new base class for logical reductions

* Constant folding for v1::ReduceLogicalAnd

* Constant folding for v1::ReduceLogicalOr

* Obsolete cout removal
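
A usage sketch for the new reductions (illustrative, namespaces elided):

```
auto data = make_shared<op::Parameter>(element::boolean, Shape{2, 3});
auto axes = op::Constant::create(element::i64, Shape{1}, {1});
auto reduce_and = make_shared<op::v1::ReduceLogicalAnd>(data, axes, /*keep_dims*/ true);
```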

* [SPEC] Adjust Split (#3943)

* Changed axis to Node

* Added using normalize from validation util

* refactored split

* Added typrop tests to Split

* Added set_input_is_relevant_to_shape for Split

* clang style applied

* Fixed var name

* Code refactor

* merge from master, part 2

* Constructor to provide CI compatibility

* CI compatibility

* CI compatibility

* Updated get_outputs

* CI compatibility

* Fixed get_outputs function
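
A sketch of the adjusted Split (illustrative, namespaces elided; per the change above, the axis is now supplied as a node instead of a static integer):

```
auto data = make_shared<op::Parameter>(element::f32, Shape{2, 6});
auto axis = op::Constant::create(element::i64, Shape{}, {1});
auto split = make_shared<op::Split>(data, axis, /*num_split*/ 3);
```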

* [SPEC] Add DeformablePSROIPooling v1 (#3954)

* Initial commit

* Moved DeformablePSROIPooling to v1

* Moved DeformablePSROIPooling to v1. Part.2

* Added missing fields

* Added shape inference

* Added type prop UT

* Added serialization

* Doc + styles applied

* Revert incorrect changes

* Revert incorrect changes. Part.2

* Moved to NGRAPH_API

* integration with master

* Code review remarks introduced

* DeformablePSROIPooling updated to new spec

* Add v1 version of Subtract with Numpy broadcasting as default (#3957)

* V1 version of Subtract with default Numpy autobcast

* Update op_v1_tbl.hpp with v1 version of Subtract

* Use v1 of Subtract in ONNX importer

* Add v1 namespace

* Update namespace

* Missing punctuation

* Add Subtract to opset0 downgrade

* Add Subtract to opset1 upgrade

* Add Subtract header to CPU emitter

* Update serializer

* Add Subtract to opset_pass tests

* Use downgrade method

* Add get_version method

* Style apply

* Add v1 Subtract to check opset1

* Add NGRAPH_API before class name

* Removed get_version method

* Separate cases for Subtract and Subtract_v1 in serializer

* Update op_version_tbl with v1 Subtract

* NUMPY autobcast for no args constructor

* Add Subtract_v1 to serializer
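
A usage sketch (illustrative, namespaces elided):

```
auto a = make_shared<op::Parameter>(element::f32, Shape{2, 3});
auto b = make_shared<op::Parameter>(element::f32, Shape{3});
auto sub = make_shared<op::v1::Subtract>(a, b); // NumPy-style broadcast by default
```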

* [SPEC] Add constant folding for LogicalNot:v1 (#3961)

* Added constant folding for LogicalNot

* Fixed alphabetical order

* Update the tolerance on auto_broadcast_test (#3959)

* Copy RT info for parameters (#3969)

* [SPEC] Add GatherTree:v1 (#3967)

* GatherTree introduced

* Added GatherTree type_prop tests
diyessi authored Dec 3, 2019
1 parent f854fd6 commit 8235c2c
Showing 583 changed files with 20,359 additions and 8,536 deletions.
8 changes: 4 additions & 4 deletions .ci/onnx/jenkins/Jenkinsfile
@@ -113,10 +113,10 @@ pipeline {
WORKDIR = "${WORKSPACE}/${BUILD_NUMBER}"
JENKINS_GITHUB_CREDENTIAL_ID = "7157091e-bc04-42f0-99fd-dc4da2922a55"
CI_DIR = "ngraph-onnx/.ci/jenkins"
NGRAPH_ONNX_REPO_ADDRESS = "https://github.com/NervanaSystems/ngraph-onnx.git"
NGRAPH_REPO_ADDRESS = "https://github.com/NervanaSystems/ngraph.git"
NGRAPH_ONNX_BRANCH="${CHANGE_BRANCH}"
NGRAPH_BRANCH="${CHANGE_BRANCH}"
NGRAPH_ONNX_REPO_ADDRESS = "git@github.com:NervanaSystems/ngraph-onnx.git"
NGRAPH_REPO_ADDRESS = "git@github.com:NervanaSystems/ngraph.git"
NGRAPH_ONNX_BRANCH = "${CHANGE_BRANCH}"
NGRAPH_BRANCH = "${CHANGE_BRANCH}"
}
options {
skipDefaultCheckout true
47 changes: 24 additions & 23 deletions CMakeLists.txt
@@ -69,14 +69,6 @@ if (UNIX AND NOT APPLE)
set(LINUX TRUE)
endif()

if ("${CMAKE_GENERATOR}" MATCHES "^Visual Studio.*$")
set(MSVS TRUE)
endif()

if (CMAKE_CXX_COMPILER_ID MATCHES MSVC)
add_definitions(/bigobj)
endif()

# APPLE: Set CMAKE_OSX_SYSROOT if not set already.
if (APPLE)
execute_process(COMMAND sw_vers -productVersion
@@ -151,7 +143,7 @@ if (CMAKE_OSX_SYSROOT)
)
endif()

if (NOT MSVS)
if (NOT MSVC)
if(NOT CMAKE_BUILD_TYPE)
set(CMAKE_BUILD_TYPE "Release" CACHE STRING "Build type" FORCE)
endif()
@@ -361,7 +353,16 @@ if ("${CMAKE_CXX_COMPILER_ID}" MATCHES "^(Apple)?Clang$" AND NOT NGRAPH_BUILD_DI
endif()

if (WIN32)
set(CMAKE_CXX_FLAGS "/W0 /EHsc /MP")
string(REPLACE "/W3" "/W0" CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /EHsc /MP")

if (CMAKE_CXX_COMPILER_ID MATCHES MSVC)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /bigobj")
# C4251 needs to have dll-interface to be used by clients of class
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /wd4251")
# C4275 non dll-interface class used as base for dll-interface class
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /wd4275")
endif()
else()
set(CMAKE_CXX_FLAGS_RELWITHDEBINFO "-g")
set(CMAKE_CXX_FLAGS_DEBUG "-O0 -g")
@@ -461,36 +462,36 @@ if (NOT DEFINED NGRAPH_TBB_ENABLE)
set(NGRAPH_TBB_ENABLE ${NGRAPH_CPU_ENABLE})
endif()

# Since UNIX and APPLE support Bash we can use a Bash script to do the clang-format functions
# Since UNIX supports Bash we can use a Bash script to do the clang-format functions
# This is much faster than the cmake method
if (UNIX OR APPLE)
if (UNIX)
add_custom_target(style-check COMMAND ${CMAKE_CURRENT_SOURCE_DIR}/maint/check-code-format.sh)
add_custom_target(style-apply COMMAND ${CMAKE_CURRENT_SOURCE_DIR}/maint/apply-code-format.sh)
add_custom_target(style COMMAND ${CMAKE_CURRENT_SOURCE_DIR}/maint/apply-code-format.sh)
else()
add_custom_target(style-check
COMMAND ${CMAKE_COMMAND}
-DNGRAPH_SOURCE_DIR="${CMAKE_SOURCE_DIR}"
-P ${CMAKE_MODULE_PATH}style_check.cmake
-DNGRAPH_SOURCE_DIR="${CMAKE_CURRENT_SOURCE_DIR}"
-P ${CMAKE_CURRENT_SOURCE_DIR}/cmake/Modules/style_check.cmake
)

add_custom_target(style-apply
COMMAND ${CMAKE_COMMAND}
-DNGRAPH_SOURCE_DIR="${CMAKE_SOURCE_DIR}"
-P ${CMAKE_MODULE_PATH}style_apply.cmake
-DNGRAPH_SOURCE_DIR="${CMAKE_CURRENT_SOURCE_DIR}"
-P ${CMAKE_CURRENT_SOURCE_DIR}/cmake/Modules/style_apply.cmake
)

add_custom_target(style
COMMAND ${CMAKE_COMMAND}
-DNGRAPH_SOURCE_DIR="${CMAKE_SOURCE_DIR}"
-P ${CMAKE_MODULE_PATH}style_apply.cmake
-DNGRAPH_SOURCE_DIR="${CMAKE_CURRENT_SOURCE_DIR}"
-P ${CMAKE_CURRENT_SOURCE_DIR}/cmake/Modules/style_apply.cmake
)
endif()

add_custom_target(fix-mode
COMMAND ${CMAKE_COMMAND}
-DNGRAPH_SOURCE_DIR="${CMAKE_SOURCE_DIR}"
-P ${CMAKE_MODULE_PATH}fix_mode.cmake
-DNGRAPH_SOURCE_DIR="${CMAKE_CURRENT_SOURCE_DIR}"
-P ${CMAKE_CURRENT_SOURCE_DIR}/cmake/Modules/fix_mode.cmake
)

#-----------------------------------------------------------------------------------------------
@@ -545,13 +546,13 @@ if(NOT DEFINED EXTERNAL_PROJECTS_ROOT)
endif()

if (NGRAPH_ONNX_IMPORT_ENABLE)
if (MSVS)
if (MSVC)
# When we build DLL libraries, these flags make sure onnx and protobuf build with /MD, not /MT.
# These two options can't be mixed, because they require linking two incompatible runtimes.
set(ONNX_USE_MSVC_STATIC_RUNTIME OFF)
set(protobuf_WITH_ZLIB OFF CACHE BOOL "" FORCE)
set(protobuf_MSVC_STATIC_RUNTIME OFF CACHE BOOL "Link protobuf to static runtime libraries" FORCE)
endif(MSVS)
endif()
if (NOT NGRAPH_USE_SYSTEM_PROTOBUF)
include(cmake/external_protobuf.cmake)
else()
@@ -591,7 +592,7 @@ endif()
if(NGRAPH_CODEGEN_ENABLE)
if (NGRAPH_USE_PREBUILT_LLVM OR DEFINED LLVM_TARBALL_URL)
include(cmake/external_llvm_prebuilt.cmake)
elseif (NOT MSVS)
elseif (NOT MSVC)
include(cmake/external_llvm.cmake)
else()
message(FATAL_ERROR "CODEGEN is not supported on Windows!")
3 changes: 2 additions & 1 deletion cmake/Modules/fix_mode.cmake
@@ -16,7 +16,8 @@

function(MODE_APPLY_FILE PATH)
execute_process(COMMAND git update-index --add --chmod=-x ${PATH}
OUTPUT_VARIABLE RESULT)
OUTPUT_VARIABLE RESULT
ERROR_QUIET)
endfunction()

set(DIRECTORIES_OF_INTEREST
18 changes: 18 additions & 0 deletions cmake/external_gtest.cmake
@@ -35,6 +35,12 @@ if(WIN32)
)
endif()

if(CMAKE_BUILD_TYPE)
list(APPEND GTEST_CMAKE_ARGS
-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}
)
endif()

if(UNIX)
# workaround for compile error
# related: https://github.com/intel/mkl-dnn/issues/55
@@ -43,6 +49,17 @@ else()
set(GTEST_CXX_FLAGS ${CMAKE_CXX_FLAGS})
endif()

#Build for ninja
if(UNIX)
SET(GTEST_PATHS ${CMAKE_BINARY_DIR}/ngraph/gtest/build/googlemock/gtest/libgtest.a
${CMAKE_BINARY_DIR}/ngraph/gtest/build/googlemock/libgmock.a)
else()
SET(GTEST_PATHS ${CMAKE_BINARY_DIR}/ngraph/gtest/build/googlemock/gtest/gtest.lib
${CMAKE_BINARY_DIR}/ngraph/gtest/build/googlemock/gtest/gmock.lib
${CMAKE_BINARY_DIR}/ngraph/gtest/build/googlemock/gtest/gtestd.lib
${CMAKE_BINARY_DIR}/ngraph/gtest/build/googlemock/gtest/gmockd.lib)
endif()

ExternalProject_Add(
ext_gtest
PREFIX gtest
@@ -60,6 +77,7 @@ ExternalProject_Add(
${GTEST_CMAKE_ARGS}
BINARY_DIR "${EXTERNAL_PROJECTS_ROOT}/gtest/build"
EXCLUDE_FROM_ALL TRUE
BUILD_BYPRODUCTS ${GTEST_PATHS}
)

#------------------------------------------------------------------------------
4 changes: 2 additions & 2 deletions cmake/external_mlir.cmake
@@ -20,8 +20,8 @@ set(MLIR_LLVM_REPO_URL https://github.com/llvm/llvm-project.git)
set(MLIR_REPO_URL https://github.com/tensorflow/mlir.git)

# Change these commit IDs to move to latest stable versions
set(MLIR_LLVM_COMMIT_ID 0845ac7331e)
set(MLIR_COMMIT_ID 1f7893e0)
set(MLIR_LLVM_COMMIT_ID e0f1d9d8729)
set(MLIR_COMMIT_ID c61db4bb)

# MLIR environment variables. Some of them are used by LIT tool.
set(MLIR_PROJECT_ROOT ${CMAKE_CURRENT_BINARY_DIR}/mlir_project)
39 changes: 31 additions & 8 deletions cmake/external_protobuf.cmake
@@ -23,7 +23,12 @@ include(ExternalProject)

# This version of PROTOBUF is required by Microsoft ONNX Runtime.
set(NGRAPH_PROTOBUF_GIT_REPO_URL "https://github.com/protocolbuffers/protobuf")
set(NGRAPH_PROTOBUF_GIT_TAG "v3.5.2")

if(NGRAPH_ONNX_IMPORT_ENABLE)
set(NGRAPH_PROTOBUF_GIT_TAG "v3.5.2")
else()
set(NGRAPH_PROTOBUF_GIT_TAG "v3.6.1")
endif()

if (WIN32)
ExternalProject_Add(
@@ -103,14 +108,32 @@ if (WIN32)
else()
set(Protobuf_LIBRARY ${Protobuf_INSTALL_PREFIX}/lib/libprotobuf.a)
endif()
set(Protobuf_LIBRARIES ${Protobuf_LIBRARY})

if (NOT TARGET protobuf::libprotobuf)
add_library(protobuf::libprotobuf UNKNOWN IMPORTED)
set_target_properties(protobuf::libprotobuf PROPERTIES
INTERFACE_SYSTEM_INCLUDE_DIRECTORIES "${Protobuf_INCLUDE_DIR}"
IMPORTED_LOCATION "${Protobuf_LIBRARY}")
add_dependencies(protobuf::libprotobuf ext_protobuf)
if(NGRAPH_ONNX_IMPORT_ENABLE)
if (NOT TARGET libprotobuf)
add_library(libprotobuf INTERFACE)
if (WIN32)
target_link_libraries(libprotobuf INTERFACE
debug ${Protobuf_INSTALL_PREFIX}/lib/libprotobufd.lib
optimized ${Protobuf_INSTALL_PREFIX}/lib/libprotobuf.lib)
else()
target_link_libraries(libprotobuf INTERFACE
${Protobuf_INSTALL_PREFIX}/lib/libprotobuf.a)
endif()
set_target_properties(libprotobuf PROPERTIES
INTERFACE_INCLUDE_DIRECTORIES "${Protobuf_INCLUDE_DIR}")
add_dependencies(libprotobuf ext_protobuf)
endif()
set(Protobuf_LIBRARIES libprotobuf)
else()
if (NOT TARGET protobuf::libprotobuf)
add_library(protobuf::libprotobuf UNKNOWN IMPORTED)
set_target_properties(protobuf::libprotobuf PROPERTIES
INTERFACE_SYSTEM_INCLUDE_DIRECTORIES "${Protobuf_INCLUDE_DIR}"
IMPORTED_LOCATION "${Protobuf_LIBRARY}")
add_dependencies(protobuf::libprotobuf ext_protobuf)
endif()
set(Protobuf_LIBRARIES protobuf::libprotobuf)
endif()

if (NOT TARGET protobuf::protoc)
Expand Down
68 changes: 68 additions & 0 deletions doc/examples/dynamic_tensor/partial_shape.cpp
@@ -0,0 +1,68 @@
//*****************************************************************************
// Copyright 2017-2019 Intel Corporation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//*****************************************************************************

#include <iostream>
#include <vector>

#include <ngraph/ngraph.hpp>

using namespace ngraph;
using namespace std;

int main()
{
    // Create and compile a graph where the provided shape info of x is
    // (2,?)
    auto x_shape_info = PartialShape{2, Dimension::dynamic()};
    auto x = make_shared<op::Parameter>(element::i32, x_shape_info);
    auto a = x + x;
    auto f = make_shared<Function>(NodeVector{a}, ParameterVector{x});
    // Request a backend that can handle dynamic tensors ("INTERPRETER" is
    // used here; the second argument asks for dynamic shape support)
    auto be = runtime::Backend::create("INTERPRETER", true);
    auto ex = be->compile(f);

    // Create a dynamic tensor of shape (2,?)
    auto t_out = be->create_dynamic_tensor(element::i32, x_shape_info);

    // Call the graph to write a value with shape (2,3) to t_out
    auto t_in = be->create_tensor(element::i32, Shape{2, 3});
    vector<int32_t> data_in(2 * 3, 1);
    t_in->write(data_in.data(), 0, data_in.size() * sizeof(int32_t));
    ex->call({t_out}, {t_in});

    // Call the graph again, to write a value with a different shape to
    // t_out.
    t_in = be->create_tensor(element::i32, Shape{2, 20});
    data_in.assign(2 * 20, 2);
    t_in->write(data_in.data(), 0, data_in.size() * sizeof(int32_t));
    ex->call({t_out}, {t_in});

    // Get the result. At this point t_out->get_shape() returns
    // Shape{2,20}, but t_out->get_partial_shape() still returns "(2,?)"
    auto s = t_out->get_shape();
    vector<int32_t> r(shape_size(s));
    t_out->read(r.data(), 0, r.size() * sizeof(int32_t));

    cout << "[" << endl;
    for (size_t i = 0; i < s[0]; ++i)
    {
        cout << "  [";
        for (size_t j = 0; j < s[1]; ++j)
        {
            cout << r[i * s[1] + j] << ' ';
        }
        cout << ']' << endl;
    }
    cout << ']' << endl;

    return 0;
}
17 changes: 9 additions & 8 deletions doc/sphinx/ngraph_theme/layout.html
@@ -38,7 +38,9 @@
<link rel="stylesheet" href="{{ pathto('_static/' + style, 1) }}" type="text/css" />
{% endif %}

<link href="https://fonts.googleapis.com/css?family=Nunito:300,300i,400&display=swap&subset=latin-ext" rel="stylesheet">
<!-- <link href="https://fonts.googleapis.com/css?family=Nunito:300,300i,400&display=swap&subset=latin-ext" rel="stylesheet"> -->
<link href="https://fonts.googleapis.com/css?family=Nunito+Sans:300,400,600,700,800,900" rel="stylesheet">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0/css/font-awesome.min.css">

{% for cssfile in css_files %}
<link rel="stylesheet" href="{{ pathto(cssfile, 1) }}" type="text/css" />
@@ -84,14 +86,13 @@

<body>
<div id="menu-float" class="menu-float">
<a href="https://www.ngraph.ai" target="_blank">Home</a>
<a href="https://www.youtube.com/embed/C9S0nmNS8bQ" target="_blank">Video</a>
<a href="https://www.ngraph.ai/ecosystem" target="_blank">Ecosystem</a>
<a href="https://ngraph.nervanasys.com/docs/latest">Docs</a>
<a href="https://www.ngraph.ai/tutorials">Tutorials</a>
<a href="https://ngraph.slack.com/"><img src="https://cdn.brandfolder.io/5H442O3W/as/pl546j-7le8zk-5h439l/Slack_Mark_Monochrome_White.png?width=35&height=35"></a>
<a href="https://www.ngraph.ai" target="_blank"><i class="fa fa-home"></i></a>
<a href="https://ngraph.nervanasys.com/docs/latest" title="Documentation Home"><i class="fa fa-book"></i></a>
<a href="https://www.ngraph.ai/tutorials" title="Tutorials"><i class="fa fa-user-circle"></i></a>
<a href="https://www.youtube.com/embed/C9S0nmNS8bQ" target="_blank"><i class="fa fa-video-camera"></i></a>
<a href="https://ngraph.slack.com/" title="nGraph Slack Channel"><i class="fa fa-slack"></i></a>
<a href="https://github.com/NervanaSystems/ngraph/blob/master/LICENSE"><img src="https://img.shields.io/badge/License-Apache%202.0-blue.svg"></a>
<a href="https://www.github.com/NervanaSystems/ngraph"><img src="https://travis-ci.org/NervanaSystems/ngraph.svg?branch=master"></a>&nbsp;&nbsp;&nbsp;</div></body>
<a href="https://www.github.com/NervanaSystems/ngraph"><img src="https://travis-ci.org/NervanaSystems/ngraph.svg?branch=master"></a></div></body>


<body class="wy-body-for-nav" role="document">
9 changes: 4 additions & 5 deletions doc/sphinx/ngraph_theme/ngversions.html
@@ -2,16 +2,15 @@
<span class="rst-current-version" data-toggle="rst-current-version">
<span class="docvs">nGraph Compiler stack</span>
v: {{ version }}
<span class=""></span>
<span></span>
</span>
<div class="rst-other-versions">
<dl>
<dt>{{ _('Recent Versions') }}</dt>
<dt>{{ _('Recent Versions') }}<i class="fa fa-terminal"></i></dt>
<dd><!-- Until our https://docs.ngraph.ai/ publishing is set up, we link to GitHub -->
<ul>
<!-- <li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.26">0.26</a></li> -->
<li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.26.0-rc.3">Prerelease 0.26</a></li>
<li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.25.1-rc.4">0.25.1</a></li>
<li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.26.0">0.26.0</a></li>
<li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.25.1-rc.10">0.25.1</a></li>
<li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.25.0">0.25.0</a></li>
<li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.24.0">0.24.0</a></li>
<li><a href="https://github.com/NervanaSystems/ngraph/releases/tag/v0.22.2-rc.0">0.22.2</a></li>