I am experimenting with training different models using different configs. Right now I am using Reflow from v2, but this problem occurs with every config/model I train.
I noticed that the inference audio from my model has worse quality than the validation files produced during training. Why is that? What can I do to reach the quality of the validation output?
A full explanation of the configuration file will be provided after the experiments are finished. For now, a LynxNet network with a width of 1024 and a depth of 6 already yields good results.
The validation samples likely sound better because validation is an in-domain case: the validation data comes from the same distribution as the training data, while your inference input may not.
So I need to use LynxNet instead of WaveNet?
By width and depth, do you mean n_layers and n_chans?
I am using Reflow, and by default config_v2_reflow is set up with WaveNet, so how should I adjust it to get a config that gives good results with Reflow? Or should I use a different config file?
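For reference, this is roughly the change I have in mind. It is only a sketch: the key names below are my guess based on the n_chans/n_layers defaults, so please correct me if the actual schema is different.

```yaml
# Sketch only: key names guessed from the default config_v2_reflow,
# not verified against the real schema.
model:
  backbone: lynxnet   # assumed switch from wavenet
  n_chans: 1024       # the "width" recommended above
  n_layers: 6         # the "depth" recommended above
# (everything else, e.g. the reflow-specific settings, left as in the default config)
```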
input:
https://drive.google.com/file/d/1pMjKUtAgZwDqWL8A3YlaWV4B7I1SECYu/view?usp=sharing
val:
https://drive.google.com/file/d/1Byz-Zeg0kCm75hzmRxiXlwHA5MZp3R2O/view?usp=sharing
infer:
https://drive.google.com/file/d/1hh4dwiodXsESe_Rv7S49nMGEcl3pZ7lc/view?usp=sharing