A novel robust learning paradigm called Joint training by combining Consistency and Diversity (JoCaD).


JoCaD

The article has been published; you are welcome to cite it: Yang, H., Yin, H., Yang, Z., et al. JoCaD: a joint training method by combining consistency and diversity. Multimed Tools Appl 83, 64573–64589 (2024). https://doi.org/10.1007/s11042-024-18221-z

If you would like the full code, please contact me or the corresponding author.

Abstract

Noisy labels caused by mistakes in manual labeling or data collection are a challenge for expanding deep neural network applications. Current robust learning methods such as Decoupling, Co-teaching, and Joint Training with Co-Regularization are promising for learning with noisy labels, yet they do not fully consider the coordination between consistency and diversity, which is crucial for model performance. To tackle this issue, this paper proposes a novel robust learning paradigm called Joint training by combining Consistency and Diversity (JoCaD). JoCaD maximizes the prediction consistency of the networks while keeping enough diversity in their representation learning. Specifically, to reconcile the relationship between consistency and diversity, an effective implementation is proposed that dynamically adjusts the joint loss to boost model learning with noisy labels. Extensive experimental results on MNIST, CIFAR-10, CIFAR-100, and Clothing1M demonstrate that the proposed JoCaD outperforms representative state-of-the-art methods.
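As a rough illustration of the kind of joint loss the abstract describes, here is a minimal sketch assuming a JoCoR-style formulation: per-network cross-entropy plus a symmetric KL term that rewards prediction consistency, weighted by `co_lambda`. The exact loss used by JoCaD is in the full code; the function name and formula here are illustrative assumptions only.

```python
# Hypothetical sketch of a consistency-weighted joint loss (NOT the
# actual JoCaD implementation; formulation assumed from JoCoR-style work).
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def joint_loss(logits1, logits2, targets, co_lambda=0.85):
    """Per-sample joint loss for two peer networks."""
    p1, p2 = softmax(logits1), softmax(logits2)
    idx = np.arange(len(targets))
    ce1 = -np.log(p1[idx, targets])   # supervised term, network 1
    ce2 = -np.log(p2[idx, targets])   # supervised term, network 2
    # Symmetric KL divergence: zero when the two networks agree exactly,
    # so a larger co_lambda pushes the peers toward consistent predictions.
    sym_kl = (p1 * np.log(p1 / p2)).sum(1) + (p2 * np.log(p2 / p1)).sum(1)
    return (1 - co_lambda) * (ce1 + ce2) + co_lambda * sym_kl
```

In this sketch, `co_lambda` (matching the `--co_lambda` flag below) trades off the supervised terms against the consistency term; dynamically changing it during training is one way to reconcile consistency and diversity.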

Data

The MNIST and CIFAR datasets are publicly available; the download logic is built into the code, so simply running it will fetch the datasets. For Clothing1M, please contact [email protected] to request the download link.

The setting of Tg on the benchmark datasets, from symmetric noise 20% to 80% and asymmetric noise 40% (MNIST, CIFAR-10, CIFAR-100, and Clothing1M)

For MNIST, Tg is 10, 10, 10, and 25. For CIFAR-10, Tg is 5, 10, 20, and 20. For CIFAR-100, Tg is 10, 10, 25, and 20. For Clothing1M, Tg is 10.
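For convenience, the values above can be transcribed into a small lookup table. This is a hypothetical helper, not part of the repository; each list simply follows the order of noise settings as stated above.

```python
# Tg (turning point) per dataset, transcribed from the README text.
# Each list follows the order of noise settings given above
# (symmetric 20% through 80%, then asymmetric 40%).
TG = {
    "mnist":      [10, 10, 10, 25],
    "cifar10":    [5, 10, 20, 20],
    "cifar100":   [10, 10, 25, 20],
    "clothing1m": [10],  # a single setting is reported for Clothing1M
}
```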

Running JoCaD on benchmark datasets (MNIST, CIFAR-10, CIFAR-100 and Clothing1M)

Here is an example:

python main.py --dataset cifar100 --noise_type symmetric --noise_rate 0.8 --co_lambda 0.85 --beta 0.01 --turning_point 25
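Assuming the same flags apply to the other datasets (an assumption; here `--co_lambda` and `--beta` are omitted and would fall back to the script's defaults), runs for other settings would look like:

```shell
# Hypothetical variants; --turning_point follows the Tg values listed above.
python main.py --dataset cifar10 --noise_type symmetric --noise_rate 0.2 --turning_point 5
python main.py --dataset mnist --noise_type asymmetric --noise_rate 0.4 --turning_point 25
```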
