This application holds test cases and calls argumentation microservices to compute evaluation metrics. Consequently, the corresponding services need to be set up according to their documentation first. We then recommend using Docker to start this app:
```shell
docker-compose build
docker-compose up
```
The app is configured via Hydra.
To change parameters, we recommend creating the file `arguelauncher/config/app.local.yaml`
and putting your overrides there.
For instance, to evaluate an adaptation approach:
```yaml
defaults:
  - app
  - _self_

path:
  requests: data/requests/microtexts-generalization

retrieval:
  mac: false
  fac: false
  nlp_config: STRF

adaptation:
  extras:
    type: openai-chat-hybrid
```
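For reference, here is a minimal sketch of how a Hydra entry point could compose such a config. The module layout, config path, and config name used below are assumptions for illustration, not the app's actual code:

```python
# Minimal Hydra entry-point sketch (assumed layout, not the app's actual code).
import hydra
from omegaconf import DictConfig, OmegaConf


@hydra.main(version_base=None, config_path="arguelauncher/config", config_name="app.local")
def main(cfg: DictConfig) -> None:
    # Print the fully composed configuration, including the local overrides.
    print(OmegaConf.to_yaml(cfg))


if __name__ == "__main__":
    main()
```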
More documentation will follow for the final version.
Requests may contain a ranking for each query with a value between 1 and 3, where 1 is the best and 3 is the worst value.
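As a hypothetical illustration (the actual request schema may differ), such a ranking could be mapped to graded relevance scores before computing ranking metrics:

```python
# Hypothetical example: the case names and schema below are placeholders.
rankings = {
    "case-a": 1,  # best match for the query
    "case-c": 2,
    "case-b": 3,  # worst match
}

# Invert the rank so that larger values mean higher relevance,
# e.g. for graded metrics such as NDCG.
relevance = {case: 4 - rank for case, rank in rankings.items()}
print(relevance)  # {'case-a': 3, 'case-c': 2, 'case-b': 1}
```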