
[Compiler] Rename compiler ROCM target to HIP #18477

Open · wants to merge 1 commit into main
Conversation

@nithinsubbiah (Collaborator)

As a follow-up to #17710, this patch renames the compiler target backend from ROCM to HIP for consistency with the driver name.

User-facing changes:
- `IREE_TARGET_BACKEND_ROCM` → `IREE_TARGET_BACKEND_HIP`
- `iree-hal-target-backends=rocm` → `iree-hal-target-backends=hip`

Signed-off-by: nithinsubbiah <nithinsubbiah@gmail.com>
@benvanik (Collaborator)

IIRC we didn't want to do this? (I think Lei told me ROCM and HIP were independent; did someone change their mind? :)

@nithinsubbiah (Collaborator, Author)

They are indeed independent. We wanted the backend names to be consistent, so I started working on this, but I'm happy to hear thoughts on how to move forward.
cc: @antiagainst @ScottTodd

@benvanik (Collaborator)

Mostly just thinking about HSA: it will use the same codegen pipeline but not target HIP, so if there's a better name for what it is, that'd be ideal ("ROCDL" may have been what Lei mentioned before). There will be a HIP IREE::HAL::TargetDevice and an HSA IREE::HAL::TargetDevice that share the same codegen backend.

@stellaraccident (Collaborator)

I'm not sure there's a better answer. Both the rocm and hip names refer to software stacked above the hardware. If we were naming from scratch, we would name it like LLVM does: "amdgpu". From that perspective, I would rename the runtime driver "amdgpu-hip". That's what I call it in shortfin, FWIW.

@qedawkins (Contributor)

OK, you all type too fast. +1 to amdgpu-hip, or maybe amdgpu-llvm. Either way, it would be nice to switch away from "rocm".

@kuhar (Member) commented Sep 10, 2024

FWIW, the bespoke clang is called `hipcc` too.

@benvanik (Collaborator)

AMDGPU may work, but ROCm as the umbrella also makes sense (HIP and ROCr/HSA are both under the ROCm umbrella) and is equivalent to CUDA.

@benvanik (Collaborator)

(Basically, I don't want inconsistency, and I don't want the compiler side, which targets multiple runtime implementations, named after one particular runtime implementation.)

@kuhar (Member) commented Sep 10, 2024

AMDGPU alone is also the ultimate backend for SPIR-V/Vulkan with amdvlk. If they ever implement offline shader compilation, we will have a name collision with that.

@qedawkins (Contributor)

IIUC AMDGPU is the name of the target in LLVM, right? https://llvm.org/docs/AMDGPUUsage.html

So maybe amdgpu-llvm for clarity? I kind of like the idea of being consistent with the LLVM backend we're targeting and also differentiating from SPIR-V.

@kuhar (Member) commented Sep 10, 2024

> IIUC AMDGPU is the name of the target in LLVM, right? https://llvm.org/docs/AMDGPUUsage.html
>
> So maybe amdgpu-llvm for clarity? I kind of like the idea of being consistent with the LLVM backend we're targeting and also differentiating from SPIR-V.

amdgpu is the name of the backend; the two targets are amdgcn--amdpal (graphics/Vulkan) and amdgcn-amd-hsa (HIP/ROCm).

@kuhar (Member) commented Sep 10, 2024

IMO hip and amdgpu-hip are the two least problematic names

@benvanik (Collaborator) commented Sep 10, 2024

Reminder: HIP != HSA, and we'll be using the same thing for HSA. In my dreams we end up deleting our HIP HAL, so "hip" is problematic :)

(The name here is for us; it doesn't have to match LLVM or anyone else, it's just what we call our target.)

@kuhar (Member) commented Sep 10, 2024

OK, I give up, naming proves hard again 🙃

@benvanik (Collaborator)

Yeah, we're in the ROC* name playground, which is certifiably inscrutable; there are no good names :P

@nithinsubbiah (Collaborator, Author) commented Sep 10, 2024

What if we go with amdgpu-rocm for the compiler target, thereby encompassing both the hip and hsa runtime drivers and also skirting the potential name clash with Vulkan?

Unless someone has a strong reason not to have the rocm keyword anywhere.

@qedawkins (Contributor)

Also, a tangential question: could we have aliased names for target backends? E.g. if I put in hip or hsa as the compiler backend, it would alias to amdgpu-rocm or whatever we choose to call it. If so, I'd prefer to lean into a more LLVM-focused name because it makes the C++ very clear that it's going to hand the executables off to LLVM.

@qedawkins (Contributor)

> Also, a tangential question: could we have aliased names for target backends? E.g. if I put in hip or hsa as the compiler backend, it would alias to amdgpu-rocm or whatever we choose to call it.

My only problem with rocm is that I've never really understood what it is :P

@benvanik (Collaborator)

rocm == cuda (effectively): an umbrella term for the entire suite of libraries/tools/languages/etc.

@stellaraccident (Collaborator)

I think the stem can defensibly be amdgpu or rocm. I prefer the former, as it is more descriptive of a physical thing that will always be that thing.

@kuhar (Member) commented Sep 10, 2024

> Also, a tangential question: could we have aliased names for target backends? E.g. if I put in hip or hsa as the compiler backend, it would alias to amdgpu-rocm or whatever we choose to call it.

The LLVM command-line flag library (`llvm::cl`) supports aliases.
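The aliasing idea is easy to sketch. This is not IREE's actual flag handling (which would go through LLVM's `llvm::cl` library, where `cl::alias`/`cl::aliasopt` provide this natively); it is a minimal, language-agnostic illustration, and the canonical name `amdgpu-rocm` plus the alias table are hypothetical examples taken from this discussion, not a real registry.

```python
# Hypothetical sketch of aliased target-backend names: "rocm", "hip",
# and "hsa" all resolve to one canonical codegen backend name. The
# canonical name "amdgpu-rocm" is just the candidate from this thread.
BACKEND_ALIASES = {
    "rocm": "amdgpu-rocm",  # legacy name kept for compatibility
    "hip": "amdgpu-rocm",   # runtime-driver-flavored alias
    "hsa": "amdgpu-rocm",   # a future HSA driver shares the codegen backend
}

def resolve_backend(name: str) -> str:
    """Map an aliased backend name to its canonical target backend.

    Non-aliased names pass through unchanged, so backends that never
    had aliases (e.g. "vulkan-spirv") are unaffected.
    """
    return BACKEND_ALIASES.get(name, name)

print(resolve_backend("hip"))           # amdgpu-rocm
print(resolve_backend("vulkan-spirv"))  # vulkan-spirv
```

With a scheme like this, user-facing flags such as `iree-hal-target-backends=hip` could keep working across a rename, since the alias layer decouples the spelled name from the internal backend identity.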
