Generating My Testing Data #11
I have been trying to generate training data recently, but I have made no progress. Is there a solution? @BetterLYY
If your problem is generating the npy file and then running detection, you can store the four kinds of data contained in the npy file into separate txt files and read those files separately in his source code, instead of insisting on generating the exact npy file he describes.
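For reference, here is a minimal sketch of reading and writing the four entries in one dict-style .npy file, with the keys as described later in this thread (depth, image, smoothed_object_pc, intrinsics_matrix). The file path is a placeholder; substitute one of the files from demo/data.

```python
import numpy as np

def load_demo_npy(path):
    # The demo files store a Python dict, so allow_pickle is required.
    data = np.load(path, allow_pickle=True).item()
    return (data['depth'], data['image'],
            data['smoothed_object_pc'], data['intrinsics_matrix'])

def save_demo_npy(path, depth, image, object_pc, K):
    # Assemble your own captures in the same dict layout and save them.
    np.save(path, {'depth': depth, 'image': image,
                   'smoothed_object_pc': object_pc,
                   'intrinsics_matrix': K})
```

This side-steps the txt-file workaround: your own data can go straight into the same dict layout the demo loader expects.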
My main worry is getting the generation rules for these data wrong. I wrote one version (capturing from the RealSense and assembling the data into a numpy-like format), but it still doesn't feel right. Have you finished this part?
I only run offline, so I haven't done that work.
So you only ran the data provided by the author, but haven't tried your own real data, right?
I used my own data. In the main program you can clearly see that four kinds of data are read from the npy file; just supply those four inputs separately.
Mate, that's exactly what I'm asking: how are these data generated? I generated some myself and pieced together the four dict entries, but it doesn't feel right. Could you explain, especially smoothed_object_pc?
@BetterLYY |
I asked the author about this too, but got no reply. As I understand it, this entry is just the 3D point cloud of the target object, which you need to segment out yourself.
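If smoothed_object_pc is indeed the segmented object point cloud, a minimal sketch of producing it is to back-project the object's depth pixels through the pinhole model. `mask` is a boolean image marking the target object (from whatever segmentation method you use); K is the 3x3 intrinsics matrix; depth is assumed to be in meters.

```python
import numpy as np

def depth_to_object_pc(depth, mask, K):
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    v, u = np.nonzero(mask & (depth > 0))  # pixel rows/cols on the object
    z = depth[v, u]
    x = (u - cx) * z / fx                  # back-project each pixel
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)     # (N, 3) point cloud in camera frame
```

The result is in the camera frame, which matches how the demo data appears to be expressed.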
OK, then I'll work it out myself. Right now, after capturing with the RealSense, I do a lot of preprocessing before feeding the data into the network, but I keep feeling something is off with my processing.
@BetterLYY |
|
|
I use a Kinect and directly use the camera parameters given in the npy file. You can also refer to the suggestion above.
@BetterLYY |
OK, now there's a question: I'm not sure what the K parameter values in the original npy mean across the different dimensions.
@imdoublecats |
|
K has been obtained in the Intrinsics format and assembled, but the generated grasps are offset. Now we do not know how to deal with it...
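A uniformly offset grasp often points at a mis-assembled intrinsics matrix. Below is a hedged sketch of building intrinsics_matrix from the per-camera scalars; the fx, fy, cx, cy values in the test are placeholders, and in practice they come from your own camera's calibration (e.g. the RealSense API or ROS driver). Note the principal point (cx, cy) belongs in the third column; swapping it into the wrong slot is a common cause of a constant offset.

```python
import numpy as np

def make_intrinsics_matrix(fx, fy, cx, cy):
    # Standard pinhole intrinsics: focal lengths on the diagonal,
    # principal point in the third column.
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])
```

A quick sanity check: projecting the camera-frame point (0, 0, 1) through K must land exactly on the principal point (cx, cy).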
@gzchenjiajun |
|
And for convenience of testing I currently only take one frame of point cloud data; I don't know if that causes any problem...
Thank you for your answers |
@gzchenjiajun Visualizing the data in the npy and comparing it with what you get from the RealSense may help.
link:https://pan.baidu.com/s/1yLL7yYBMcdTWV1UThTlGTg password:iz12 |
I'm really in trouble now. |
Here are the stages you need to follow to generate grasps for new scenes and execute them with the robot.
The model in this repo generates grasps for the target object regardless of the clutter, so grasps may collide with other objects. If you want to remove those grasps, you can implement CollisionNet from here: https://arxiv.org/abs/1912.03628

[optional] Regarding the smoothed point cloud: since depth images from the RealSense are noisy, one can smooth them by averaging 10 consecutive frames and removing the pixels with jittery depth. That helps smooth the performance, but even without smoothing you should be able to get results comparable to the provided examples.

For RealSense cameras, intrinsics can be extracted from the RealSense API or the ROS driver itself.
@arsalan-mousavian
This step has already succeeded.
Hi @gzchenjiajun, I already ran [email protected]:NVlabs/UnseenObjectClustering.git successfully, but how can I get the corresponding segmented data, such as the segmented depth and image?
|
Hello, I'm interested in your 6-DOF GraspNet project and am trying to run the code. Recently I have had some questions about generating new data. In the folder demo/data, the provided .npy files contain depth, image, smoothed_object_pc, and intrinsics_matrix, but I have trouble generating .npy files with those contents myself. Could you give me some instructions on how to generate the .npy files used in this code? Besides, I can use a Kinect to get depth images now. I would appreciate your instructions, thanks!