Add dlpack support #1306
base: master
Conversation
I don't know much about dlpack, but I'm quite sure this won't be merged if

This is still a draft. The API of dlpark is still evolving and I'll release v0.3.0 this week. I want to utilize the ownership system to make sure a DLPack tensor can only be consumed once; otherwise there will be a double-free error. And thanks for your advice, I will add the feature gate.
unsafe impl<A> Send for ManagedRepr<A> where A: Send {}

impl<A> FromDLPack for ManagedArray<A, IxDyn> {
    fn from_dlpack(dlpack: NonNull<dlpark::ffi::DLManagedTensor>) -> Self {
This function takes a raw pointer (wrapped in NonNull) and it must be an unsafe function; otherwise we can trivially violate memory safety, unfortunately. The only way to remove this requirement, the requirement of using unsafe, would be if you had a "magical" function that could take an arbitrary pointer and say whether it's a valid, live, non-mutably aliased pointer to a tensor.
Here's how to create a dangling bad pointer: NonNull::new(1 as *mut u8 as *mut dlpark::ffi::DLManagedTensor)
Does this code crash if we run it with this pointer? I think it would.
I agree with you. from_dlpack should be unsafe, and users should use it at their own risk.
I would say we normally don't commit to public dependencies that are not stable (yes, not a very fair policy, since ndarray itself is not so stable), and dlpark is a public dependency here because it becomes part of our API. That could mean a long time between version bumps.
Maybe we don't need to include dlpark as a dependency. We can create an ArrayView using ArrayView::from_shape_ptr and ManagedTensor. I can implement ToTensor for ArrayD in dlpark with a new ndarray feature. I'll do some quick experiments.
let strides: Vec<usize> = match (managed_tensor.strides(), managed_tensor.is_contiguous()) {
    (Some(s), _) => s.into_iter().map(|&x| x as _).collect(),
    (None, true) => managed_tensor
Later work: check compatibility of dlpack and ndarray strides (how they work, their domains, etc.).
DLPack is an open in-memory tensor structure for sharing tensors among frameworks. Supporting dlpack will make it easier to share tensors with dfdx, tch-rs, and even numpy and torch in Python (with the pyo3 feature).