Commit

Update README.md
YoungXiyuan authored Sep 14, 2019
1 parent b990c96 commit ef9c847
Showing 1 changed file (README.md) with 15 additions and 3 deletions.
@@ -26,15 +26,27 @@ The above data archive mainly contains the following resource files:

- **Type Embedding**: Adopted to compute type similarity between mention-entity pairs. We trained these type embeddings with the [NFETC](https://arxiv.org/abs/1803.03378) typing model.

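The type-similarity signal can be pictured as a cosine score between a mention's and a candidate entity's type embeddings. The sketch below is illustrative only (the function and variable names are ours, and real NFETC embeddings are much higher-dimensional than these toy vectors):

```python
import numpy as np

def type_similarity(mention_vec, entity_vec):
    """Cosine similarity between a mention's and a candidate entity's type embedding."""
    denom = np.linalg.norm(mention_vec) * np.linalg.norm(entity_vec)
    return float(np.dot(mention_vec, entity_vec)) / denom if denom else 0.0

# Toy 4-dimensional type vectors for illustration.
mention = np.array([1.0, 0.0, 1.0, 0.0])
entity = np.array([1.0, 0.0, 0.5, 0.0])
score = type_similarity(mention, entity)  # ~0.95
```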
- **Wikipedia inLinks**: Surface names of inlinks to a Wikipedia page (entity) are used to construct the **dynamic context** during model learning.

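As a rough picture of how inlink surface names could feed the dynamic context, one can keep the top-n most frequent names per entity. This is a simplified stand-in with made-up names, not the model's actual selection logic (cf. `tok_top_n4inlink` below):

```python
from collections import Counter

def top_inlink_names(inlink_surface_names, top_n):
    """Keep the top_n most frequent inlink surface names for the dynamic context.
    Simplified illustration; the model scores inlinks differently."""
    return [name for name, _ in Counter(inlink_surface_names).most_common(top_n)]

names = ["Paris", "France", "Paris", "Seine", "France", "Paris"]
context = top_inlink_names(names, 2)  # ["Paris", "France"]
```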
## Installation
Requirements: Python 3.5 or 3.6, PyTorch 0.3, CUDA 7.5 or 8

## Important Parameters

```
mode: train or eval.
order: decision order -- offset / size / random. Offset links all mentions in their natural order in the original document; please refer to our paper for the definitions of the other two.
n_cands_before_rank: number of candidate entities kept per mention before ranking (default: 30).
tok_top_n4inlink: number of inlinks to a Wikipedia page (entity) considered as candidates for the dynamic context.
tok_top_n4ent: number of inlinks to a Wikipedia page (entity) actually added to the dynamic context.
isDynamic: 2-hop DCA / 1-hop DCA / without DCA, corresponding to Table 4 in our paper.
dca_method: soft+hard attention / soft attention / average sum, corresponding to Table 5 in our paper.
```
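For orientation, a training run might combine these parameters as below. This is a hypothetical configuration sketch: the concrete flag spellings and the value encodings (e.g. for isDynamic and dca_method) should be checked against the repository's run scripts.

```python
# Hypothetical configuration mirroring the parameter list above; the
# integer encodings for isDynamic and dca_method are illustrative guesses.
config = {
    "mode": "train",             # train or eval
    "order": "offset",           # offset / size / random
    "n_cands_before_rank": 30,   # candidates per mention before ranking (default)
    "tok_top_n4inlink": 100,     # inlinks considered for the dynamic context (example value)
    "tok_top_n4ent": 10,         # inlinks actually added to the dynamic context (example value)
    "isDynamic": 2,              # e.g. 2-hop DCA (Table 4)
    "dca_method": 0,             # e.g. soft+hard attention (Table 5)
}
```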

## Running
