
Add HSTU ragged attention operator #2453

Open · wants to merge 1 commit into base: main

Conversation

@xuzhao9 (Contributor) commented on Sep 11, 2024

This PR adds the HSTU ragged attention operator to the Triton operator benchmarks, in two variants: a standard kernel (`hstu_triton_ragged_attention`) and a persistent kernel (`hstu_triton_ragged_attention_persistent`).

On H100:
```
$ python run_benchmark.py triton --op ragged_attention

            x_val    hstu_triton_ragged_attention-latency    hstu_triton_ragged_attention_persistent-latency
-----------------  --------------------------------------  -------------------------------------------------
(8, 4, 512, 2048)                               0.0141706                                          0.0128713
(8, 4, 512, 2048)                               0.0187315                                          0.0171204
(8, 4, 512, 2048)                               0.0156807                                          0.0155399
(8, 4, 512, 2048)                               0.0165724                                          0.0154679
(8, 4, 512, 2048)                               0.0163886                                          0.0157738
(8, 4, 512, 2048)                               0.0173378                                          0.0155991
(8, 4, 512, 2048)                               0.0164874                                          0.0153128
(8, 4, 512, 2048)                               0.0203275                                          0.0172193
(8, 4, 512, 2048)                               0.0214526                                          0.0185414
(8, 4, 512, 2048)                               0.0172307                                          0.0169625
```
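For readers unfamiliar with the operator: "ragged" (jagged) attention runs attention independently over variable-length sequences that are packed into a single tensor and delimited by an offsets tensor, avoiding padding. Below is a minimal PyTorch reference sketch of that layout; the function name, signature, and use of softmax `scaled_dot_product_attention` are illustrative assumptions, not this Triton operator's actual API (HSTU's attention additionally replaces softmax with a pointwise nonlinearity, which the sketch does not reproduce).
```python
# Minimal reference sketch of the ragged/jagged layout (assumed, illustrative).
import torch
import torch.nn.functional as F

def ragged_attention_ref(q, k, v, seq_offsets):
    """q, k, v: (total_tokens, num_heads, head_dim); seq_offsets: (batch + 1,).

    Sequence i occupies rows seq_offsets[i]:seq_offsets[i + 1], so sequences
    of different lengths share one packed tensor with no padding.
    """
    out = torch.empty_like(q)
    for i in range(seq_offsets.numel() - 1):
        start, end = int(seq_offsets[i]), int(seq_offsets[i + 1])
        # Per-sequence (num_heads, seq_len, head_dim) views.
        qi = q[start:end].transpose(0, 1)
        ki = k[start:end].transpose(0, 1)
        vi = v[start:end].transpose(0, 1)
        # Standard softmax attention stands in for HSTU's pointwise variant.
        oi = F.scaled_dot_product_attention(qi, ki, vi)
        out[start:end] = oi.transpose(0, 1)
    return out

# Two sequences of lengths 3 and 5 packed into 8 tokens, 4 heads, head_dim 64.
q = k = v = torch.randn(8, 4, 64)
print(ragged_attention_ref(q, k, v, torch.tensor([0, 3, 8])).shape)  # (8, 4, 64)
```
The fused Triton kernels benchmarked above compute all sequences in one launch rather than looping per sequence as this reference does; the persistent variant additionally keeps CTAs resident and streams work to them, which is consistent with its lower latencies in the table.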

@facebook-github-bot (Contributor) commented

@xuzhao9 has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

@facebook-github-bot (Contributor) commented

This pull request was exported from Phabricator. Differential Revision: D62513596. Pulled by: xuzhao9.
