LLVM Discussion Forums

Category for the AMDGPU target

Hi all,

I don’t know what the right process would be to talk about adding a new category to the Discourse group, but a topic on Discourse itself seems like a good start…

We’ve been having discussions over the last few weeks with stakeholders both inside and outside of AMD about where we could best have a dedicated and open discussion space for topics around the AMDGPU target, meaning both the backend itself and how it relates to the various frontends, e.g. Clang (for compute offloading), LLPC (for the AMDVLK Vulkan driver), and Mesa (for the radeonsi/radv graphics drivers).

The conclusion was that we’d like to try using a Discourse category. The “About” of the category:

Why should people use this category? What is it for?

This category is for discussions specific to both the development of the AMDGPU target in upstream LLVM and its use inside the LLVM project and by outside compiler frontends (e.g. LLPC, Mesa).

It is not a forum for end-user support. (If helpful souls in the community answer such questions, then that’s awesome and welcome! However, AMD cannot offer support through this channel – we want expectations to be clear on this.)

How exactly is this different from the other categories we already have?

It’s specific to the AMDGPU target 🙂

What should topics in this category generally contain?

Good topics for this category include:

  • High-level discussions about longer-term, larger-scope projects within the AMDGPU target, i.e. discussions that are too large in scope for an individual Phabricator review.
  • Questions and discussions about the semantics of target-specific intrinsics, address spaces, ABI, etc.
  • Questions about best practices for front-end code generation.

Cc @chandlerc, who probably is an admin of the Discourse space, or at least knows who is 🙂

Cheers,
Nicolai


Hi Nicolai,

I apologize for the late reply to this. I have added the AMDGPU category. Thank you for providing the description!

Meike

Thank you very much!