Commit caa67439b52 — authored by Yi Zhang, committed via GitHub
Add more F16 kernels of XNNPack (#22381)

### Description
1. Add Gemm, MatMul, Softmax, AveragePool and Resize F16 kernels. This PR includes all changes in #22378.
   [AB#51066](https://aiinfra.visualstudio.com/6a833879-cd9b-44a4-a9de-adc2d818f13c/_workitems/edit/51066)
   [AB#51026](https://aiinfra.visualstudio.com/6a833879-cd9b-44a4-a9de-adc2d818f13c/_workitems/edit/51026)
2. In XNNPack, matrix B must be a constant and neither A nor B may have more than 2 dimensions, so I added two tests in matmul_test.cc that satisfy these constraints, to make sure the kernel is really exercised (that is, Compute() must actually be called). A minimal sketch of such a test is included at the end of this description.

### Motivation and Context
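For illustration of point 2 above, here is a minimal sketch of the kind of test meant there. It is not the exact test added in matmul_test.cc; it assumes ONNX Runtime's standard `OpTester` helper and the `DefaultXnnpackExecutionProvider()` test factory, and the test name and data are made up for this example.

```cpp
// Illustrative sketch only -- not the exact test added by this PR.
// Assumes ONNX Runtime's standard test helpers (OpTester,
// DefaultXnnpackExecutionProvider) from the test utilities below.
#include <memory>
#include <vector>

#include "test/providers/provider_test_utils.h"
#include "test/util/include/default_providers.h"

namespace onnxruntime {
namespace test {

TEST(MathOpTest, MatMulFloat16_Xnnpack_ConstB_2D) {
  OpTester test("MatMul", 13);

  // B is a 2x2 identity matrix, so the expected output Y equals A.
  const std::vector<MLFloat16> a = {MLFloat16(1.0f), MLFloat16(2.0f),
                                    MLFloat16(3.0f), MLFloat16(4.0f)};
  const std::vector<MLFloat16> b = {MLFloat16(1.0f), MLFloat16(0.0f),
                                    MLFloat16(0.0f), MLFloat16(1.0f)};

  // XNNPack only takes the node when B is a constant initializer and both
  // A and B are at most 2-D; otherwise the node falls back to the CPU EP
  // and the F16 kernel under test is never reached.
  test.AddInput<MLFloat16>("A", {2, 2}, a);
  test.AddInput<MLFloat16>("B", {2, 2}, b, /*is_initializer=*/true);
  test.AddOutput<MLFloat16>("Y", {2, 2}, a);

  // Run against the XNNPack execution provider explicitly.
  std::vector<std::unique_ptr<IExecutionProvider>> eps;
  eps.push_back(DefaultXnnpackExecutionProvider());
  test.Run(OpTester::ExpectResult::kExpectSuccess, "", {}, nullptr, &eps);
}

}  // namespace test
}  // namespace onnxruntime
```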