
How to implement a custom operator that support multiple compute device (CPU, CUDA)? #23317

Open
wangxianliang opened this issue Jan 10, 2025 · 0 comments
Labels
ep:CUDA issues related to the CUDA execution provider

Comments

@wangxianliang

Ask a Question

I tried the following implementation, but it had no effect.

CUDA implementation:

struct CustomOPGpu : Ort::CustomOpBase<CustomOPGpu, CustomKernel> {
  const char* GetName() const { return "CustomOP"; }
  const char* GetExecutionProviderType() const { return "CUDAExecutionProvider"; }
  ...
};

CPU implementation:

struct CustomOPCpu : Ort::CustomOpBase<CustomOPCpu, CustomKernel> {
  const char* GetName() const { return "CustomOP"; }
  const char* GetExecutionProviderType() const { return "CPUExecutionProvider"; }
  ...
};

The doc (https://onnxruntime.ai/docs/reference/operators/add-custom-op.html) doesn't include any sample code for this case.
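One pattern that should make this work (a sketch only, not verified against a specific ONNX Runtime build; the domain name "my.custom" and the function name RegisterCustomOps are illustrative) is to register both per-provider op instances under the same Ort::CustomOpDomain. Because both structs return the same name from GetName() but different values from GetExecutionProviderType(), ONNX Runtime can dispatch to whichever variant matches the execution provider assigned to the node:

```cpp
#include <onnxruntime_cxx_api.h>

// Illustrative sketch: register the CPU and CUDA variants of "CustomOP"
// under one custom op domain. The structs CustomOPCpu / CustomOPGpu are the
// ones defined above. "my.custom" is a placeholder domain name; the nodes in
// the model must reference the same domain.
void RegisterCustomOps(Ort::SessionOptions& session_options) {
  // Static lifetime: the ops and the domain must stay alive for as long as
  // any session created from these options uses them.
  static CustomOPCpu cpu_op;  // GetExecutionProviderType() == "CPUExecutionProvider"
  static CustomOPGpu gpu_op;  // GetExecutionProviderType() == "CUDAExecutionProvider"
  static Ort::CustomOpDomain domain = [] {
    Ort::CustomOpDomain d{"my.custom"};
    d.Add(&cpu_op);  // same op name, CPU execution provider
    d.Add(&gpu_op);  // same op name, CUDA execution provider
    return d;
  }();
  session_options.Add(domain);
}
```

If the model still runs the CPU kernel only, it is worth checking that the CUDA execution provider is actually appended to the session options (e.g. via OrtSessionOptionsAppendExecutionProvider_CUDA) before the session is created, since otherwise every node falls back to the CPU provider.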

Question

Further information

  • Relevant Area:

  • Is this issue related to a specific model?
    Model name:
    Model opset:

Notes

@github-actions github-actions bot added the ep:CUDA issues related to the CUDA execution provider label Jan 10, 2025