Unclear how to use the code #5
Hi @r-barnes, I was able to run phase 1 using:
I had to fix a few small issues, though, and used Python 3 instead of Python 2. Furthermore, I had to resize the images first using the provided code. @arashno
Thanks @JohannesBrand: I'll give that a try, though I still think the documentation for this project should be expanded.
@r-barnes I also have problems running the pre-trained models on my own images. I wonder if you have figured out a clear way to input the images. Thanks!
Hi All,
Yes, 0 means empty and 1 means animal. |
Thank you for sharing this repo. We would like to use the phase 1 model to predict animal vs. no animal in new images. Initially we intend to make predictions without fine-tuning, so our input is images without labels. Therefore the recommended repo (https://github.com/arashno/tensorflow_multigpu_imagenet) does not seem to fit our application: in that repo, data_info.txt includes a label for each image, whereas we do not have labels but rather want to predict them using the phase 1 model. We have loaded the phase 1 model using the (partial) code below, but we are new to TensorFlow and do not know how to use the model to make predictions on new images. Any advice (especially additional code to make predictions) would be much appreciated! Thanks!

```python
import os
#script_dir = os.path.dirname(__file__)  # <-- absolute dir the script is in
print(abs_file_path_meta)
config = tf.ConfigProto(allow_soft_placement=True)
```
There are two solutions:

1. In this repo (Evolving-AI-Lab/deep_learning_for_camera_trap_images), provide fake labels (for example, all empty, all full, or even random labels) and then run the evaluation (i.e. `python eval.py ...`) of the phase 1 model over the provided labels. Then, in the output file, disregard the fake labels and keep only the model's predictions.
2. The recommended repo (arashno/tensorflow_multigpu_imagenet) now supports "inference" (prediction); you will need to run a command like this:

```
python run.py inference preds.txt --log_dir path/to/downloaded/phase1/weights --path_prefix path/to/preprocessed/images --data_info data_info.txt ...
```

Please let me know if any part of the explanation is unclear or if you have any trouble.
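Solution 1 can be sketched as a tiny helper that generates the fake-label file. This is a sketch only: the comma-delimited `filename,label` format and the image extensions are assumptions based on this thread, so check `data_loader.py` in your copy before relying on it.

```python
import os

def write_fake_data_info(image_dir, out_path, fake_label=0):
    """Write a data_info.txt with a fake label (0 = "empty") for every
    image in image_dir, so eval.py has something to consume. Afterwards,
    ignore the fake labels in the output and keep only the predictions."""
    with open(out_path, "w") as f:
        for name in sorted(os.listdir(image_dir)):
            if name.lower().endswith((".jpg", ".jpeg", ".png")):
                f.write("%s,%d\n" % (name, fake_label))
```

After running `eval.py` over this file, the second column of the output is meaningless; only the predicted labels matter.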
Thanks for the helpful reply, @arashno! We tried solution 1 but get a syntax error in eval.py. I checked that we are able to import datetime in Python in the active environment, so that does not seem to be the problem. I guess I may be missing something that will seem obvious once you've pointed it out! Thanks again for troubleshooting. In C:/Users/etc/Documents/R/bats/phase1, we are not sure whether we have the weights; we have checkpoint, snapshot-55.data-00000-of-00001, snapshot-55.index, and snapshot-55.meta. Our data_info.txt reads:
Here is the output we get:
snapshot-55.data-00000-of-00001 contains the weights. Your data_info should look like this:

```
Bat_licking_DPS - Copy.mov.jpg 1
```

The code will prepend the value of the --path_prefix argument to the path of every image. I am confused: you mentioned that you were able to fix the syntax error, so what error are you getting now? Line 7 just imports the datetime module.
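As a quick illustration of the --path_prefix behavior described above (a sketch; the repo may concatenate the prefix and filename differently, e.g. by plain string concatenation):

```python
import os

# Hypothetical illustration: --path_prefix is combined with each
# filename listed in data_info.txt to form the full image path.
path_prefix = "C:/Users/Documents/R/bats/jpg"
filename = "Bat_licking_DPS - Copy.mov.jpg"
full_path = os.path.join(path_prefix, filename)
print(full_path)
```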
Hi @arashno -- thanks for this explanation and guidance. The invalid-syntax error occurred because we had downloaded the HTML page rather than the eval.py file; we have solved this issue. Now we are getting a different error:

```
(r-reticulate) C:\Users\Documents\R\bats>python eval.py preds.txt --num_threads 4 --architecture vgg --log_dir C:/Users//Documents/R/bats/phase1 --path_prefix C:/Users//Documents/R/bats/jpg --data_info data_info.txt
```

Having looked at read_label_file in data_loader, it's not clear what this error is about. Again, thanks a ton for your help! We appreciate any further advice.
It seems to be a delimiter problem. Try:

```
Bat_licking_DPS - Copy.mov.jpg,1
```
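A minimal sketch of why the delimiter matters here: the filename itself contains spaces, so splitting on whitespace fragments it into several fields, while splitting on the last comma keeps it intact (assuming the loader splits each line on a single delimiter):

```python
# Space-delimited: the filename is shattered into several fields.
line_space = "Bat_licking_DPS - Copy.mov.jpg 1"
print(line_space.split(" "))  # ['Bat_licking_DPS', '-', 'Copy.mov.jpg', '1']

# Comma-delimited: splitting on the last comma keeps the filename whole.
line_comma = "Bat_licking_DPS - Copy.mov.jpg,1"
name, label = line_comma.rsplit(",", 1)
print(name)   # Bat_licking_DPS - Copy.mov.jpg
print(label)  # 1
```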
Hi @arashno, thank you for pointing this out! We changed data_info.txt as you recommended; we really appreciate your help. We are now getting a different error that we again can't figure out:

```
(r-reticulate) C:\Users\Documents\R\bats>python eval.py preds.txt --num_threads 4 --architecture vgg --log_dir C:/Users//Documents/R/bats/phase1 --path_prefix C:/Users//Documents/R/bats/jpg --data_info data_info.txt
```

We found this site (https://blog.csdn.net/Felaim/article/details/84098986) which (according to a collaborator who reads Chinese) suggests that a solution would involve adding a line to common.py:

```python
is_training = tf.cast(True, tf.bool)
```

But we don't know where to try adding this. Thanks for taking a look and for any advice on a solution!
It seems that there is a version incompatibility. What is your Tensorflow version? |
I was using a mix of the two repos; that makes sense that version incompatibility would result -- my mistake. Using only this repo, I get this error:

```
(r-reticulate) C:\Users\Documents\R\bats\deep_learning_for_camera_trap_images-master>python eval.py preds.txt --num_threads 4 --architecture vgg --log_dir C:/Users//Documents/R/bats/deep_learning_for_camera_trap_images-master/phase1 --path_prefix C:/Users//Documents/R/bats/deep_learning_for_camera_trap_images-master/jpg --data_info data_info.txt
```

Here are the TensorFlow version and other packages in the environment:
```
# packages in environment at C:\Users\fischhoffi\AppData\Local\conda\conda\envs\r-reticulate:
# Name           Version   Build   Channel
_tflow_select    2.1.0     gpu     anaconda
```

Would you recommend using this repo or the recommended repo? Thanks again!
Although the other repository is compatible with Python 3, this repository only works with Python 2.7. |
I just spent a day figuring out how to run the pre-trained models. Here are a few things I learned that might be useful for others:
While Phase 1 and Phase 2 Recognition Only work fine, I still have not been able to run Phase 2; I will write another post with the errors I am getting. It would be nice if the authors could provide a small test dataset with all the input files and commands needed to run each phase -- it would probably save a lot of people a lot of time. That said, thanks for making the code and pre-trained models available!
For Phase 2 I created a data_info.txt file with the image name plus 9 extra columns, all 0:
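A hedged sketch of building such a file. The comma delimiter and the exact number and meaning of the placeholder columns are assumptions based on this thread; adjust both to match what your copy of the data loader expects.

```python
def write_phase2_data_info(image_names, out_path):
    """Write one line per image: the image name followed by 9
    placeholder zeros standing in for the unknown phase-2 labels."""
    with open(out_path, "w") as f:
        for name in image_names:
            f.write(name + "," + ",".join(["0"] * 9) + "\n")

write_phase2_data_info(["img_0001.jpg"], "data_info.txt")
```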
Thanks for letting us know @arashno! |
@Mo-nasr : That might do better as a separate question/issue. |
I agree with @matobler:
I have been running into similar issues as mentioned by previous people. I took @Mo-nasr's fork and adapted it -- thanks for that! Follow the updated README to get it running. New features:
I will try to clean this up and convert more and more eventually. Right now, though, I am not seeing very good classification results; the only things classified correctly are elephants.
Despite looking at the recommended repo, it's still a little unclear how to use this.
An example including an appropriately formatted input file and a couple of example images would go a long way toward making this useful to others.