Code not working #39
Comments
Make sure the lengths of your video and audio are the same.
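A minimal sketch of that duration check, done on numbers you would read from the files yourself (the function name and the tolerance are hypothetical, not part of this project):

```python
def durations_match(n_frames, fps, n_samples, sample_rate, tol=0.05):
    """Return True if the video and audio durations differ by less than `tol` seconds."""
    video_dur = n_frames / fps          # video length in seconds
    audio_dur = n_samples / sample_rate # audio length in seconds
    return abs(video_dur - audio_dur) < tol

# e.g. a 100-frame clip at 25 fps vs. 4 s of 16 kHz audio
print(durations_match(100, 25.0, 64000, 16000))  # → True
```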
Hey,
You can solve your problem by using ffmpeg.
I solved the problem by resizing the frames of the video to 224x224.
Hello, I have the same issue; would you tell me how you fixed it? @samyak0210
@EhsanRusta Maybe you should resize your video frames to 224x224, just like the example.avi.
Actually, the full pipeline:
Here
And how do you filter a dataset for wav2lip?
Can you share some methods for preprocessing wav2lip datasets with this project? Thank you.
Just change

```python
for fname in flist:
    images.append(cv2.imread(fname))
```

to

```python
for fname in flist:
    images.append(cv2.resize(cv2.imread(fname), (224, 224)))
```

The model was not meant to work with other shapes.
Do you know how to filter the dataset yet?
Hey, were you able to filter the dataset?
Hello,
I was using your code on a video, but it gave an error while running the demo_syncnet.py file. It ran fine for example.avi but does not run for my video. Can you help me?