Commit 271c509d592, authored by Sushanth Rajasankar
DP4AMatMul perf refinements (#23539)

In this change:

1. Vectorization of k is updated to 4.
2. Tile_A and Tile_B are stored transposed in shared memory, which improves memory locality for our access pattern.
3. Lane output is switched to individual vectors and its loop is unrolled; this fixes the problem where lane_output was previously not kept in registers.

Perf improvements are not very consistent with this change. On a Tiger Lake GPU with driver 32.0.101.6460 (the latest Intel driver):

```
Baseline
model_benchmark.exe -i C:\Phi-3.5-mini-instruct-onnx-web\Phi-3.5-mini-instruct-onnx-web\ -l 1000
Batch size: 1, prompt tokens: 1001, tokens to generate: 128
Prompt processing (time to first token):
    avg (us):       7.36557e+06  <<<<
    avg (tokens/s): 135.903
    p50 (us):       7.35498e+06
    stddev (us):    27599
    n:              5 * 1001 token(s)

With change
model_benchmark.exe -i C:\Phi-3.5-mini-instruct-onnx-web\Phi-3.5-mini-instruct-onnx-web\ -l 1000
Batch size: 1, prompt tokens: 1001, tokens to generate: 128
Prompt processing (time to first token):
    avg (us):       6.52302e+06  <<<<
    avg (tokens/s): 153.457
    p50 (us):       6.52224e+06
    stddev (us):    10407.3
    n:              5 * 1001 token(s)
```

However, comparing the before and after profiles in Intel GPA, one can clearly see straight runs of ALU work no longer interspersed with writebacks to the local memory that previously held lane_output.
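The actual change is a WGSL compute shader in onnxruntime; as an illustration only, here is a NumPy sketch of the ideas behind changes 1 and 2: the k dimension is consumed 4 int8 values at a time (mirroring a DP4A instruction), and the B tile is stored transposed so the inner loop reads contiguous rows of both tiles. The function names and tile shapes here are hypothetical, not from the shader.

```python
import numpy as np

def dp4a(acc, a4, b4):
    # DP4A-style op: dot product of two 4-element int8 vectors,
    # accumulated into an int32 scalar.
    return acc + int(np.dot(a4.astype(np.int32), b4.astype(np.int32)))

def tile_matmul(A, B):
    # A: (M, K) int8, B: (K, N) int8, K a multiple of 4.
    # Store the B tile transposed (N, K), standing in for the transposed
    # Tile_B in shared memory, so both operands are read row-contiguously.
    M, K = A.shape
    N = B.shape[1]
    Bt = np.ascontiguousarray(B.T)      # "shared memory" tile, transposed
    C = np.zeros((M, N), dtype=np.int32)
    for m in range(M):
        for n in range(N):
            acc = 0
            for k in range(0, K, 4):    # k vectorized by 4
                acc = dp4a(acc, A[m, k:k+4], Bt[n, k:k+4])
            C[m, n] = acc
    return C

# Sanity check against a plain int32 matmul.
rng = np.random.default_rng(0)
A = rng.integers(-128, 128, size=(4, 8), dtype=np.int8)
B = rng.integers(-128, 128, size=(8, 4), dtype=np.int8)
assert np.array_equal(tile_matmul(A, B),
                      A.astype(np.int32) @ B.astype(np.int32))
```

On real hardware the dp4a step is a single instruction and the accumulators live in registers per lane, which is what change 3 restores by unrolling the lane-output loop.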