Initial weights when training for new objects (Suggestion) #68
How long did you train your model with the new object class? And how many epochs? |
In my experience the answer depends heavily on the training parameters. In my case:
Training time will vary widely depending on these (and probably other) parameters, in particular dataset size, as would be expected. I should mention that convergence is faster for the first 200 epochs; after that, all precision metrics increase slowly, but 85% precision on the average 3D distance of model vertices (ADD) metric is reachable after approximately 500 epochs. |
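For readers unfamiliar with the metric mentioned above, here is a minimal numpy sketch of the ADD computation (the function name is illustrative; the 10%-of-model-diameter threshold is the convention commonly used with LINEMOD-style evaluation, not something stated in this thread):

```python
import numpy as np

def add_metric(model_pts, R_gt, t_gt, R_pr, t_pr):
    """Average 3D distance (ADD): mean distance between model vertices
    transformed by the ground-truth pose and by the predicted pose.
    model_pts: Nx3 array of vertices; R_*: 3x3 rotations; t_*: 3-vectors."""
    gt = model_pts @ R_gt.T + t_gt   # vertices under ground-truth pose
    pr = model_pts @ R_pr.T + t_pr   # vertices under predicted pose
    return np.linalg.norm(gt - pr, axis=1).mean()

# A pose is commonly counted as correct when ADD is below 10% of the
# model diameter; "85% precision" is the fraction of test images passing.
```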
Okay, interesting. How many images per class, or are all 25000 images for 1 class only? I am also having trouble with the 3D model validation in valid.py: I always score 0% on the validation task that uses the 3D .ply model file. I am using the following model: colacan.ply. Maybe the .ply file is incorrectly constructed? My model has many vertices, while the LINEMOD .ply files only use 1 mesh. Could this have an effect? |
I created the dataset for single-image pose estimation. I cannot really say whether your .ply has any issues. I would, however, recommend using Blender to output the .ply model. If you have the original 3D model, you may be able to import it into Blender and export the .ply file from there. The actual toolchain I used in my case was:
The output .ply file is then readable using the MeshPly.py file in singleshot6dpose. The fact that I used three different programs for such a simple task is a mess, but I found no other way around it. I should mention, however, that I used the intermediate .FBX for other purposes not related to the .ply file, which justified following such a process. On a side note, I have entirely dropped the .ply file for testing: it is read only to extract the bounding cube corners and store them in the corners3D array. You can see this in my fork, lines 78, 79, 80. Be aware that corners3D is a 4x8 array whose final row is eight 1's; I am simply overwriting rows 1 to 3 of the output from get_3D_corners(). You can simply print it to see the shape. |
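The 4x8 shape described above (three coordinate rows plus a final row of ones, i.e. homogeneous coordinates ready for projection) can be sketched as follows; the function name is illustrative and not from the repo:

```python
import numpy as np

def to_homogeneous_corners(corners):
    """Turn an 8x3 array of bounding-cube corners into the 4x8
    homogeneous-coordinate layout described above: rows 0-2 hold
    x, y, z and the final row is all ones."""
    c = np.asarray(corners, dtype=float).T            # 3x8
    ones = np.ones((1, c.shape[1]))                   # the row of 1's
    return np.vstack([c, ones])                       # 4x8
```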
Thanks for your response! I tested out your train2.py --> result. My bounding box seems to be incorrect. Are certain corners mapped to specific corners in both the annotated data and the bounding box? As I understand it, only the centroid needs to be placed in the label .txt file as the first coordinate (x0, y0); can the other 8 corner coordinates be placed without keeping the order in mind? It would be great if you could hit me up at [email protected]. Thanks in advance :) |
Thanks for your explanations @juanmed ! @jgcbrouns : The corner order is important, as the predicted corners and the 3D bounding box corners must correspond. If the order of the predicted corners and the reference 3D bounding box corners differ, PnP won't return meaningful results. I would follow the order in https://github.com/Microsoft/singleshotpose/blob/master/label_file_creation.md to place the corners, i.e.,
|
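For reference, a minimal sketch of one consistent corner ordering (this assumes the min/max-combination order with x varying slowest; verify it against label_file_creation.md and the repo's get_3D_corners() before generating your own labels):

```python
import numpy as np

def corners_3d(vertices):
    """Axis-aligned bounding-cube corners of an Nx3 vertex array in a
    fixed, reproducible order: every (x, y, z) min/max combination,
    x varying slowest, then y, then z. The same order must be used for
    both the label files and the reference cube fed to PnP."""
    mins = vertices.min(axis=0)
    maxs = vertices.max(axis=0)
    return np.array([[x, y, z]
                     for x in (mins[0], maxs[0])
                     for y in (mins[1], maxs[1])
                     for z in (mins[2], maxs[2])])
```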
@btekin You are very welcome! Thanks for releasing this awesome implementation. I was just writing an answer for this, so thanks for confirming that it is important to follow the ordering. Maybe now you can check our other questions? hehe @jgcbrouns , the author already confirmed. In my case I did not try any random ordering; I verified that the ordering of all my labels was the same as for the labels in the LINEMOD dataset, then trained and had no issues with "strange" bounding boxes. Also, I wrote you an email. I would, however, think it's better to keep any conversation relevant to singleshot6dpose here, as other users might find it useful. |
Closing this as it is a duplicate of #75 |
Hello,
I have been using singleshotpose to train the network for single-object detection. I have tried different initial weights and got better results (faster convergence and higher accuracy) when starting from the init.weights in the /backup/benchvise/ directory, without pretraining (i.e., not using the yolo-pose-pre.cfg file).
The weights I have tried are:
darknet19_448.conv.23 , no pretraining:
darknet19_448.conv.23 , with pretraining: slowest, because it needs pretraining followed by the final training (this is what the README recommends when training for new objects).
benchvise/init.weights, no pretraining: best result
benchvise/model_backup.weights, no pretraining: results similar to init.weights (just a bit slower)
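The best-performing variant above can be launched with the repo's train.py; a sketch, assuming the argument order from the README (`python train.py <datacfg> <cfgfile> <initweightfile>`) and with example file names you would substitute for your own:

```shell
# Example config names -- substitute your own .data/.cfg paths.
# Starting from another object's init.weights instead of
# darknet19_448.conv.23 skips the separate pretraining stage.
DATA_CFG=cfg/benchvise.data           # your dataset config
NET_CFG=cfg/yolo-pose.cfg             # full training config (not yolo-pose-pre.cfg)
INIT_WEIGHTS=backup/benchvise/init.weights

# Echoed rather than executed so this sketch runs without the repo present.
echo python train.py "$DATA_CFG" "$NET_CFG" "$INIT_WEIGHTS"
```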
I hope this helps anybody trying to use this awesome work for their own objects.