Training & evaluation library for text-based neural re-ranking and dense retrieval models built with PyTorch
WSDM'22 Best Paper: Learning Discrete Representations via Constrained Clustering for Effective and Efficient Dense Retrieval
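The paper above learns discrete representations for dense retrieval via constrained clustering. As a loose illustration of the general idea of quantizing dense embeddings into discrete codes (plain k-means, not the paper's constrained method; function and parameter names are hypothetical):

```python
import numpy as np

def kmeans_codes(vectors, k=4, iters=10, seed=0):
    """Assign each dense vector a discrete code: the index of its
    nearest centroid after a few rounds of k-means."""
    rng = np.random.default_rng(seed)
    # Initialize centroids from randomly chosen vectors.
    centroids = vectors[rng.choice(len(vectors), size=k, replace=False)]
    for _ in range(iters):
        # Distance of every vector to every centroid, then nearest code.
        dists = np.linalg.norm(vectors[:, None] - centroids[None], axis=-1)
        codes = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned vectors.
        for c in range(k):
            members = vectors[codes == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    return codes, centroids

embeddings = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
codes, centroids = kmeans_codes(embeddings, k=2, iters=5)
```

At search time, only the small integer codes (and the centroid table) need to be stored per document, which is the source of the memory and speed gains such methods target.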
Code and data to facilitate using BERT/ELECTRA for document ranking. For details, refer to the paper PARADE: Passage Representation Aggregation for Document Reranking.
This is our solution for KDD Cup 2020. We implemented a neat and simple neural ranking model based on siamese BERT, which ranked first among solo teams and 12th among all teams on the final leaderboard.
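A siamese ranking model of this kind uses one shared encoder for both query and document and scores the pair by vector similarity. A minimal PyTorch sketch of the architecture (a toy embedding encoder stands in for BERT; all names and sizes here are illustrative, not this repository's code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseRanker(nn.Module):
    """Toy siamese ranker: one shared encoder embeds queries and
    documents; relevance is their cosine similarity."""
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, dim)  # shared weights
        self.proj = nn.Linear(dim, dim)

    def encode(self, token_ids):
        return self.proj(self.embed(token_ids))

    def forward(self, query_ids, doc_ids):
        # Same encoder on both sides, then cosine similarity per pair.
        q = F.normalize(self.encode(query_ids), dim=-1)
        d = F.normalize(self.encode(doc_ids), dim=-1)
        return (q * d).sum(dim=-1)

model = SiameseRanker()
queries = torch.tensor([[1, 2, 3], [1, 2, 3]])
docs = torch.tensor([[4, 5, 6], [1, 2, 3]])  # second doc equals the query
scores = model(queries, docs)
```

Because the two towers share weights, document encodings can be precomputed offline and ranking reduces to a similarity lookup, which is what makes this design efficient for large candidate sets.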
This is our solution for WSDM Cup - DiggSci 2020. We implemented a simple yet robust search pipeline that ranked 2nd on the validation set and 4th on the test set. We won the gold prize in the innovation track and the bronze prize in the dataset track.
An easy-to-use Python toolkit for flexibly adapting various neural ranking models to any target domain.
Deep Learning Hard (DL-HARD) is a new annotated dataset extending the TREC Deep Learning benchmark.
CODEC is a document and entity ranking dataset that focuses on complex essay-style topics.
Code repository of the NAACL'21 paper "CoRT: Complementary Rankings from Transformers"
Efficient interpolation-based ranking on CPUs
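Interpolation-based ranking fuses a lexical score (e.g. BM25) with a neural score via a weighted combination. A minimal sketch of the general technique (the min-max normalization and `alpha` weight are illustrative assumptions, not this repository's API):

```python
def interpolate(bm25_scores, neural_scores, alpha=0.5):
    """Fuse lexical and neural scores per document:
    alpha * neural + (1 - alpha) * bm25, after min-max normalization
    so the two score scales are comparable."""
    def norm(xs):
        lo, hi = min(xs), max(xs)
        return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in xs]
    b = norm(bm25_scores)
    n = norm(neural_scores)
    return [alpha * y + (1 - alpha) * x for x, y in zip(b, n)]

fused = interpolate([10.0, 5.0, 0.0], [0.0, 1.0, 0.5], alpha=0.5)
```

Since the fusion is a few arithmetic operations per candidate, it runs comfortably on CPUs once the neural scores for the top candidates are available.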
Establishing Strong Baselines for TripClick Health Retrieval; ECIR 2022