The Academic Benchmark is designed to facilitate research in computer science (CS). Its main purpose is to address a common problem in CS: most published methods are difficult to compare against because of a lack of public datasets, open code, or clear experimental settings. Therefore, on this website, we not only report the performance of state-of-the-art algorithms in different domains, but also collect the corresponding datasets, code, and scripts needed to make the experimental results reproducible. When developing new methods in these domains, researchers can simply pick up and compare against all the available baseline methods. In this way, we help researchers spend less time on duplicated effort and more time on new ideas.


Benchmark for NeuIR was released in November 2016.
Benchmark for word representation was released in May 2015.
Benchmark for community detection was released in May 2015.
Benchmark for learning to rank was released in January 2015.
Benchmark for recommendation was released in January 2015.