
Clean CUDA operators #108

Merged
merged 1 commit into main from llm/clean_cuda on Oct 3, 2024

Conversation

@lukem12345 (Member) commented Oct 3, 2024

In advance of adding more CUDA versions and wrappers of DEC operators (possibly based on KernelAbstractions.jl), it is necessary to clean up the existing code.

For example, the restrictions on dispatching CUDA.jl kernel functions led to some redundant code, which has now been deleted. The dispatch of the regular and inverse Hodge stars was also suboptimal, for example by not exploiting the abstract DiscreteHodge type. Docstrings have been cleaned up as well, and the docstrings for the functions internal to the wedge product are no longer exposed.
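
For illustration, here is a minimal sketch of the dispatch consolidation described above, not the package's actual API: a single GPU wrapper written against the abstract `DiscreteHodge` supertype covers every concrete Hodge variant, instead of one near-identical method per concrete type. The helpers `hodge_star_matrix`, `to_gpu`, and `hodge_star_gpu` are hypothetical stand-ins; with CUDA.jl, `to_gpu` might be a `CuArray` or CUSPARSE conversion.

```julia
# Simplified stand-ins for the package's Hodge types (illustrative only).
abstract type DiscreteHodge end
struct GeometricHodge <: DiscreteHodge end
struct DiagonalHodge  <: DiscreteHodge end

# Hypothetical CPU constructors for a Hodge star matrix, one per concrete type.
hodge_star_matrix(::GeometricHodge) = [2.0 0.0; 0.0 3.0]
hodge_star_matrix(::DiagonalHodge)  = [1.0 0.0; 0.0 1.0]

# Placeholder for a device transfer; with CUDA.jl this could be `CuArray(A)`.
to_gpu(A) = A

# One wrapper dispatching on the abstract supertype replaces duplicated
# per-subtype GPU methods.
hodge_star_gpu(h::DiscreteHodge) = to_gpu(hodge_star_matrix(h))

hodge_star_gpu(GeometricHodge())  # works for any DiscreteHodge subtype
```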

@lukem12345 lukem12345 added the enhancement label Oct 3, 2024
@lukem12345 lukem12345 self-assigned this Oct 3, 2024
3 resolved review comments on ext/CombinatorialSpacesCUDAExt.jl
@GeorgeR227 (Contributor) commented

Please also let me know if the CUDA tests are passing after these changes.

@lukem12345 lukem12345 closed this Oct 3, 2024
@lukem12345 lukem12345 reopened this Oct 3, 2024
@lukem12345 lukem12345 merged commit 9cbba3c into main Oct 3, 2024
12 of 13 checks passed
@lukem12345 lukem12345 deleted the llm/clean_cuda branch October 3, 2024 22:23