BARS is a project aimed at open BenchmArking for Recommender Systems: https://openbenchmark.github.io/BARS
Despite the significant progress made in both the research and practice of recommender systems over the past two decades, the field still lacks a widely recognized benchmark. This not only makes it difficult to reproduce existing studies, but also leads to inconsistent experimental results among them, which largely limits the practical value and potential impact of research in this field. In this project, we make our initial efforts towards open benchmarking for recommender systems. The BARS benchmark project allows anyone to easily follow and contribute, and we hope it will drive more solid and reproducible research on recommender systems.
The BARS benchmark currently covers the following two tasks:
- BARS-CTR: An Open Benchmark for CTR Prediction
- BARS-Match: An Open Benchmark for Candidate Item Matching
Ongoing projects:
- BARS-Rerank: An Open Benchmark for Listwise Reranking
- BARS-MTL: An Open Benchmark for Multi-Task Recommendation
If you find our benchmarks helpful in your research, please cite the following paper:
Jieming Zhu, Quanyu Dai, Liangcai Su, Rong Ma, Jinyang Liu, Guohao Cai, Xi Xiao, Rui Zhang. BARS: Towards Open Benchmarking for Recommender Systems. The 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), 2022. [Bibtex]
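For convenience, the citation above can be written as a BibTeX entry like the one below. The citation key and any fields not stated in the reference (e.g. page numbers, which are omitted here) are assumptions; please check the official [Bibtex] link for the authoritative entry.

```bibtex
@inproceedings{Zhu2022BARS,
  title     = {{BARS}: Towards Open Benchmarking for Recommender Systems},
  author    = {Jieming Zhu and Quanyu Dai and Liangcai Su and Rong Ma and
               Jinyang Liu and Guohao Cai and Xi Xiao and Rui Zhang},
  booktitle = {Proceedings of the 45th International {ACM} {SIGIR} Conference on
               Research and Development in Information Retrieval ({SIGIR})},
  year      = {2022}
}
```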
We welcome any contributions that help improve the BARS benchmark. Check the start guide on how to contribute.
If you have any questions or feedback about the BARS benchmark, please open a new issue or join our WeChat group.