Commits


Yufeng Li authored and GitHub committed c7ced7a5e9c
Add PackedAttention for packing mode (#14858)

### Description

Transformer models can process a batch of inputs at once, but the sequences in a batch usually have different lengths, so the shorter sequences must be padded to the length of the longest one. This padding is inefficient, especially for large batches with high length variance. This PR introduces a PackedAttention operator that takes packed sequences (no padding) as input and produces its output in packing mode as well. A follow-up PR will use PackedAttention to implement the encoder in packing mode.
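For context, the sketch below illustrates the packing transformation this operator is built around: padding tokens are dropped and the batch is flattened into a single token dimension, with cumulative sequence lengths recording where each sequence begins and ends. The function names, shapes, and the choice of cumulative lengths here are illustrative assumptions for exposition, not the operator's actual input schema.

```python
# Illustrative sketch of packing/unpacking variable-length sequences.
# Names and shapes are assumptions, not PackedAttention's real interface.
import numpy as np

def pack_sequences(padded, seq_lens):
    """Remove padding from a [batch, max_len, hidden] tensor.

    Returns the packed [total_tokens, hidden] tensor plus cumulative
    sequence lengths that let a kernel locate each sequence.
    """
    batch, max_len, hidden = padded.shape
    packed = np.concatenate(
        [padded[b, : seq_lens[b]] for b in range(batch)], axis=0
    )
    cum_seq_lens = np.concatenate(([0], np.cumsum(seq_lens))).astype(np.int32)
    return packed, cum_seq_lens

def unpack_sequences(packed, cum_seq_lens, max_len):
    """Restore the padded [batch, max_len, hidden] layout (zeros as padding)."""
    batch = len(cum_seq_lens) - 1
    hidden = packed.shape[-1]
    padded = np.zeros((batch, max_len, hidden), dtype=packed.dtype)
    for b in range(batch):
        start, end = cum_seq_lens[b], cum_seq_lens[b + 1]
        padded[b, : end - start] = packed[start:end]
    return padded

# Example: three sequences of lengths 5, 2, and 7 pack into 14 tokens
# instead of 3 * 7 = 21 padded slots.
rng = np.random.default_rng(0)
seq_lens = np.array([5, 2, 7])
padded = rng.standard_normal((3, 7, 16)).astype(np.float32)
packed, cum = pack_sequences(padded, seq_lens)
print(packed.shape, cum)  # (14, 16) [ 0  5  7 14]
```

With this layout, attention only spends compute on real tokens; the savings grow with batch size and with the variance of sequence lengths within the batch.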