Commits


petermcaughan authored and GitHub committed e5189330d59
Address OOM Issue when exporting Whisper (#15880)

### Description
Remove `attention_mask` from unnecessary code paths in the Whisper export process.

### Motivation and Context
The current export script frequently hits an OOM error when exporting whisper-large. Memory profiling shows this is caused by generating dummy inputs for the `encoder_attention_mask` input used in a model pass during export; for whisper-large, this dummy tensor can be around 20 GB. `encoder_attention_mask` is ultimately a dummy input: it exists only to satisfy certain BeamSearch requirements. We were therefore creating a 20 GB tensor and passing it to the model, which then discarded it anyway. Removing the code path that generates the dummy encoder mask tensor substantially reduces the memory required to export Whisper, while keeping the BeamSearch checks satisfied.

---------

Co-authored-by: Peter McAughan <petermca@microsoft.com>
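The idea behind the fix can be sketched as follows: when building dummy inputs for an export/tracing pass, simply never allocate the tensor the model is going to discard. This is a minimal illustration, not the actual export script; the function name, the `include_encoder_mask` flag, and the shapes are all hypothetical placeholders rather than whisper-large's real dimensions.

```python
import numpy as np

def make_dummy_inputs(batch_size, feature_dim, seq_len, include_encoder_mask=False):
    """Build dummy inputs for an export/tracing pass (illustrative sketch).

    `include_encoder_mask` is a hypothetical flag: when False, the large
    encoder_attention_mask tensor is never allocated, mirroring the commit's
    approach of pruning the unused code path instead of materializing a
    tensor the model discards.
    """
    inputs = {
        # Placeholder shape for an audio-feature input; not whisper-large's real shape.
        "input_features": np.zeros((batch_size, feature_dim, seq_len), dtype=np.float32),
    }
    if include_encoder_mask:
        # The model ignores this input; allocating it only wastes memory.
        inputs["encoder_attention_mask"] = np.ones((batch_size, seq_len), dtype=np.int64)
    return inputs

dummy = make_dummy_inputs(batch_size=1, feature_dim=80, seq_len=3000)
```

With the flag left at its default, the dict contains only the inputs the model actually consumes, so peak memory during export scales with the real inputs rather than with a throwaway mask.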