Commits


Jiajia Qin authored and GitHub committed 0409c639f77
[js/webgpu] Optimize MultiHeadAttention|Transpose (#22420)

### Description

With this optimization, 96 MultiHeadAttention|Transpose ops in phi3 disappear. Phi3 improves from 107 tokens to 113 tokens on my dGPUs. The optimization mainly skips the transpose op when one of the transposed dims is 1; a reshape is enough.
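
The idea behind the skip can be sketched as follows. This is a minimal TypeScript illustration, not the actual ORT WebGPU code, and `transposeIsReshape` is a hypothetical helper: when the permutation only moves size-1 axes, the flat memory layout is unchanged, so the transpose can be lowered to a metadata-only reshape.

```typescript
// Sketch (assumed helper, not the real implementation): a transpose whose
// permutation only relocates size-1 axes leaves the element order in memory
// untouched, so a reshape to the permuted dims is sufficient.
function transposeIsReshape(dims: readonly number[], perm: readonly number[]): boolean {
  // Only axes with extent > 1 determine the memory layout.
  const nonTrivial: number[] = [];
  for (let i = 0; i < dims.length; ++i) {
    if (dims[i] !== 1) nonTrivial.push(i);
  }
  // The same axes, in the order the permutation visits them.
  const permuted = perm.filter((axis) => dims[axis] !== 1);
  // If the non-trivial axes keep their relative order, no data movement
  // is needed.
  return nonTrivial.every((axis, i) => permuted[i] === axis);
}

// Example: a [1, S, N, H] -> [S, 1, N, H] transpose (perm = [1, 0, 2, 3])
// only moves a size-1 axis, so a reshape suffices.
console.log(transposeIsReshape([1, 8, 4, 64], [1, 0, 2, 3])); // true
console.log(transposeIsReshape([2, 8, 4, 64], [1, 0, 2, 3])); // false
```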