diff --git a/README.md b/README.md
index fd08cd5..c89708a 100644
--- a/README.md
+++ b/README.md
@@ -26,15 +26,27 @@ The above data archive mainly contains the following resource files:
 - **Type Embedding**: Adopted to compute type similarity between mention-entity pairs. We trained these type embedding using a typing system called [NFETC](https://arxiv.org/abs/1803.03378) model.
-- **Wikipedia inLinks and outLinks**: Surface names of inlinks and outlinks for a Wikipedia page (entity) are used to construct **dynamic context** in our model learning process.
+- **Wikipedia inLinks**: Surface names of inlinks for a Wikipedia page (entity) are used to construct **dynamic context** in our model learning process.
 
 ## Installation
 
 Requirements: Python 3.5 or 3.6, Pytorch 0.3, CUDA 7.5 or 8
 
 ## Important Parameters
+
 ```
-mode: train or eval
-order: three decision orders, that is, 1) *Offset* links all mentions by their natural orders in the original documen
+mode: train or eval.
+
+order: three decision orders -- offset / size / random. Please refer to our paper for their concrete definitions.
+
+n_cands_before_rank: the number of candidates kept before ranking; the default value is 30.
+
+tok_top_n4inlink: the number of inlinks of a Wikipedia page (entity) that are considered as candidates for the dynamic context.
+
+tok_top_n4ent: the number of inlinks of a Wikipedia page (entity) that are added into the dynamic context.
+
+isDynamic: 2-hop DCA / 1-hop DCA / without DCA, corresponding to Table 4 in our paper.
+
+dca_method: soft+hard attention / soft attention / average sum, corresponding to Table 5 in our paper.
 ```
 
 ## Running
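+
+A minimal sketch of a training invocation, assuming the parameters above are exposed as command-line flags of a `main.py` entry point (the script name, flag spellings, and the numeric encodings for isDynamic / dca_method are assumptions for illustration; check the repository's argument parser for the authoritative names and values):
+
+```
+# Hypothetical example only: flag spellings mirror the parameter names above;
+# assumed encodings: isDynamic 2 = 2-hop DCA, dca_method 1 = soft+hard attention.
+python main.py --mode train --order offset --n_cands_before_rank 30 \
+    --tok_top_n4inlink 10 --tok_top_n4ent 5 --isDynamic 2 --dca_method 1
+```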