
Rag Fusion #51

Open · DelaramRajaei opened this issue Feb 14, 2024 · 3 comments

Labels: enhancement (New feature or request), experiment

Comments

@DelaramRajaei (Member)

This is the issue where I log the progress of rag fusion in the RePair project.

@DelaramRajaei DelaramRajaei added enhancement New feature or request experiment labels Feb 14, 2024
@DelaramRajaei DelaramRajaei self-assigned this Feb 14, 2024
@DelaramRajaei (Member, Author)

Hello @hosseinfani,

I wanted to provide you with an update on the project.

I've uploaded all the recent code updates to the NQLB branch on GitHub. You can find them here.

Additionally, I've included the rag fusion results on the robust04 dataset. You can access them through this link.

The comparison column is based on different categories:

  • all: original vs all refiners vs rag_fusion_all vs rag_fusion_global vs rag_fusion_local vs rag_fusion_bt
  • global: original vs global refiners vs rag_fusion_global vs rag_fusion_bt
  • local: original vs local refiners vs rag_fusion_local vs rag_fusion_bt
  • bt: original vs bt refiner vs rag_fusion_bt

The analysis covers two IR rankers (BM25 and QLD) and three evaluation metrics (MAP, MRR, and NDCG).

The "#" indicates the number of refined queries that achieved the best evaluation score within each category of refiners.
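Since rag fusion merges the ranked lists produced by the different refined queries, the fusion step itself can be illustrated with reciprocal rank fusion (RRF), the merging scheme commonly used in RAG Fusion. This is a minimal sketch, not the actual RePair implementation; the document ids and the constant k=60 are illustrative assumptions.

```python
def reciprocal_rank_fusion(ranked_lists, k=60):
    """Fuse several ranked lists of doc ids into one.

    Each document is scored by the sum of 1 / (k + rank) over every
    list it appears in; documents ranked highly by many query variants
    rise to the top of the fused list.
    """
    scores = {}
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

# Example: three refined queries return overlapping rankings
fused = reciprocal_rank_fusion([
    ["d1", "d2", "d3"],
    ["d2", "d1", "d4"],
    ["d2", "d3", "d1"],
])
print(fused)  # d2 leads, since two of the three variants rank it first
```

The same function applies regardless of which refiner category (all, global, local, bt) produced the query variants; only the input lists change.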

Currently, I'm experimenting with other datasets. In the meantime, I'm considering focusing on comparing the original evaluation results with the rag fusion in each category.

I've also created a chart to visualize the results. I would appreciate your suggestions on how to improve it further.

@hosseinfani (Member)

Hi @DelaramRajaei ,
Thanks for the update and the code merge!

Looking at the Excel sheet, I'm not sure I understood the comparison. Are you available today (Thursday) for a quick meeting?

Just a quick request: your results are spread over different gsheets, right? Could you come up with a better way of managing the result files? We'll talk :)

@DelaramRajaei (Member, Author)

Hi @hosseinfani,

Yes, I am available online today, at any time.

I've been having trouble finding a good solution for the chart.
As for the tables, each Google Sheet represents a dataset.
Another option is to compare datasets that share the same ranker.metric values.
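Comparing datasets on a shared ranker.metric key could be sketched as below. This is a hypothetical illustration: the row schema and the score values are made up for the example and do not reflect the actual gsheet contents.

```python
from collections import defaultdict

def group_by_ranker_metric(rows):
    """Group per-dataset result rows by their ranker.metric key
    (e.g. "bm25.map") so datasets can be compared on the same measure."""
    groups = defaultdict(list)
    for row in rows:
        key = f'{row["ranker"]}.{row["metric"]}'
        groups[key].append((row["dataset"], row["score"]))
    return dict(groups)

# Illustrative rows only; scores are placeholders, not real results
rows = [
    {"dataset": "robust04", "ranker": "bm25", "metric": "map", "score": 0.25},
    {"dataset": "gov2",     "ranker": "bm25", "metric": "map", "score": 0.31},
    {"dataset": "robust04", "ranker": "qld",  "metric": "map", "score": 0.22},
]
print(group_by_ranker_metric(rows)["bm25.map"])
```

Each group then holds every dataset measured under the same ranker and metric, which is exactly the cross-dataset comparison described above.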
