Commit e695cd304a1
George Nash authored and GitHub committed
Dnnl refactor (#8627)

* dnnl ep rework

Rework DnnlTensor, DnnlNode, and DnnlSubgraph to support arbitrary graph topology and tensor data types.

Rework GetCapability to claim nodes in the graph greedily from the node topological ordering, and delay creation of the DnnlSubgraph until Compile.

Rework Compile to make DnnlSubgraphPrimitive the object that handles primitive creation and execution, replacing the thread-local primitive pool, which duplicated the intermediate memory allocated by the EP across threads. DnnlSubgraphPrimitive provides helpers for the common functions each dnnl primitive builder needs and becomes the centralized place to store input, output, intermediate, initializer, and other memories. It provides functions to obtain input memories with automatic reordering/reshaping and moving between engines, and interfaces to add a primitive, set the output memory for a single node, etc.

Add the CONCURRENT_EXEC compile flag for the dnnl library; without it, a convolution primitive cannot be created and executed on different threads.

Enable the unit tests to also run on the dnnl EP when built with the dnnl EP.

Add dnnl EP support for MatMulInteger.

* Add Relu to the DNNL refactor

Signed-off-by: George Nash <george.nash@intel.com>

* Add Convolution op to the DNNL rework

Signed-off-by: George Nash <george.nash@intel.com>

* Add Pooling ops to the DNNL rework

This adds the following ops:
- AveragePool
- GlobalAveragePool
- GlobalMaxPool
- MaxPool

Note: Pooling with dilation is not yet supported.
Note: GlobalLpPool, LpPool, MaxRoiPool, and MaxUnpool are not supported yet.

Signed-off-by: George Nash <george.nash@intel.com>

* Add Sum op to the DNNL rework

Signed-off-by: George Nash <george.nash@intel.com>

* Add ConvGrad op to the DNNL rework

Signed-off-by: George Nash <george.nash@intel.com>

* Add MaxPoolGrad and AveragePoolGrad ops to DNNL rework

Signed-off-by: George Nash <george.nash@intel.com>

* Added lrn operator to the refactored code

Signed-off-by: chethan.palangoutu.keshava@intel.com

* Added ReduceMean DNNL op to the refactored code

Signed-off-by: Chethan Palangotu Keshava <chethan.palangotu.keshava@intel.com>

* Added Softmax DNNL op for the refactored code

Signed-off-by: Chethan Palangotu Keshava <chethan.palangotu.keshava@intel.com>

* Added BatchNorm DNNL op (inference-only) for the refactored code

Signed-off-by: Chethan Palangotu Keshava <chethan.palangotu.keshava@intel.com>

* Added Binary Ops to DNNL rework

Signed-off-by: Wang <zhaoyang.wang@intel.com>

* Added ReluGrad to DNNL rework

Signed-off-by: Wang <zhaoyang.wang@intel.com>

* Update OneDNN tag to v2.3

Signed-off-by: Wang <zhaoyang.wang@intel.com>

* Added support for memory up to dim size 12

This fixes the CI test cases that contain binary ops with input dim size > 5.

Signed-off-by: Wang <zhaoyang.wang@intel.com>

* Prevent claiming support for float16 and bfloat16 when only float is supported

The string.find that was used caused the code to claim support for float16 and bfloat16 when we only supported float. We now explicitly check for the exact data type string, or for the data type wrapped in the 7-letter prefix "tensor(" (see the sketch after this message).

Signed-off-by: George Nash <george.nash@intel.com>

* Disable uint8 mul and div, improve type conversion

Disable the mul_uint8 and div_uint8 test cases, as they use modulo for overflow handling while onednn uses saturation. Improve type conversion by using an enum instead of string comparison, and add more types.

Signed-off-by: Wang <zhaoyang.wang@intel.com>

Co-authored-by: Wang <zhaoyang.wang@intel.com>
Co-authored-by: Chethan Palangotu Keshava <chethan.palangotu.keshava@intel.com>
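
For context on the float16/bfloat16 fix above, here is a minimal C++ sketch of the described check. The IsTypeSupported helper and its signature are hypothetical, written to illustrate the bug and the fix; this is not the actual onnxruntime code.

    #include <cassert>
    #include <string>

    // Hypothetical helper: accept only the exact type name, or the type
    // wrapped in the 7-letter "tensor(" prefix used by ONNX type strings.
    bool IsTypeSupported(const std::string& node_type, const std::string& supported) {
      // Exact match on the bare type name, e.g. "float".
      if (node_type == supported) return true;
      // Match the ONNX type string form, e.g. "tensor(float)".
      return node_type == "tensor(" + supported + ")";
    }

    int main() {
      // The old substring search found "float" inside "tensor(float16)",
      // so the EP wrongly claimed float16/bfloat16 support.
      assert(std::string("tensor(float16)").find("float") != std::string::npos);

      // The explicit check only accepts the exact type or "tensor(<type>)".
      assert(IsTypeSupported("tensor(float)", "float"));
      assert(!IsTypeSupported("tensor(float16)", "float"));
      assert(!IsTypeSupported("tensor(bfloat16)", "float"));
      return 0;
    }

On the disabled uint8 mul/div tests: with modulo overflow handling, a uint8 product such as 200 * 2 wraps to 144 (400 mod 256), while oneDNN's saturation clamps it to 255, so the two behaviors legitimately disagree on overflowing values and the test expectations cannot hold for both.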