Intrinsic issues in the AL system and their solutions. These issues stem not from changed assumptions but from the nature of AL itself.
The selection process of AL introduces bias into the system.
- On Statistical Bias In Active Learning: How and When to Fix It [2021, ICLR]: Active learning can be helpful not only as a mechanism to reduce variance as it was originally designed, but also because it introduces a bias that can be actively helpful by regularizing the model.
- Addressing Bias in Active Learning with Depth Uncertainty Networks... or Not [2021, Arxiv]: Reducing the "AL bias" brings no improvement for DUN, a model with low "overfitting bias". When the "AL bias" is eliminated with importance weights, we always pay the price of additional variance ("overfitting bias").
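The importance weights mentioned above correct the AL bias by reweighting the actively sampled points into an unbiased risk estimate. A minimal sketch of the LURE weights from the ICLR 2021 paper above, assuming points are acquired one at a time without replacement from a pool of size `N` with recorded acquisition probabilities `q`:

```python
def lure_weights(q, N):
    """LURE importance weights (Farquhar et al., ICLR 2021).

    q : acquisition probabilities q_m(i_m), one per acquired point,
        in acquisition order (m = 1..M), each over the remaining pool.
    N : size of the unlabeled pool the points were drawn from.

    Weighting the per-point losses by these values yields an unbiased
    estimate of the pool risk; the price is extra variance.
    """
    M = len(q)
    weights = []
    for m, q_m in enumerate(q, start=1):
        # v_m = 1 + (N - M)/(N - m) * (1 / ((N - m + 1) q_m) - 1)
        v = 1.0 + (N - M) / (N - m) * (1.0 / ((N - m + 1) * q_m) - 1.0)
        weights.append(v)
    return weights
```

Note that under uniform (random) acquisition, q_m = 1/(N - m + 1), every weight reduces to 1 and the estimator coincides with the plain average, as expected.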
Thus, several works try to mitigate overfitting from the start to improve the overall performance, e.g., by designing an adaptive loss.
- On Statistical Bias In Active Learning: How and When to Fix It [2021, ICLR]
- Depth Uncertainty Networks for Active Learning [2021, NeurIPS]
- Towards Dynamic and Scalable Active Learning with Neural Architecture Adaption for Object Detection [2021, BMVC]: Add NAS into the AL loops.
Resampling the selected uncertain data based on feature matching can alleviate the data-bias problem.
Resample:
- Unsupervised Fusion Feature Matching for Data Bias in Uncertainty Active Learning [2022, TNNLS]
- Energy-based Out-of-distribution Detection [2020, NeurIPS]
- Active Incremental Learning for Health State Assessment of Dynamic Systems With Unknown Scenarios [2022, IEEE Transactions on Industrial Informatics]
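The energy score from the NeurIPS 2020 paper above is a simple scalar computed from a classifier's logits; resampling or filtering can threshold on it to discard out-of-distribution candidates. A minimal sketch of the score itself (the selection pipeline around it is paper-specific and not shown):

```python
import math

def energy_score(logits, T=1.0):
    """Energy score (Liu et al., NeurIPS 2020): E(x) = -T * logsumexp(logits / T).

    Lower (more negative) energy indicates in-distribution data;
    OOD detection thresholds on this score.
    """
    # Subtract the max logit for a numerically stable logsumexp.
    m = max(l / T for l in logits)
    return -T * (m + math.log(sum(math.exp(l / T - m) for l in logits)))
```

A confident prediction (one dominant logit) drives the energy down, so samples with energy above a validation-tuned threshold would be treated as OOD.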