
Commit

Update website source
PiperOrigin-RevId: 366828734
jameswex authored and LIT team committed Apr 5, 2021
1 parent 1c776f7 commit a0046c7
Showing 5 changed files with 19 additions and 17 deletions.
22 changes: 11 additions & 11 deletions docs/demos/index.html
@@ -96,6 +96,17 @@
<div class="demo-card-copy">Use LIT with any of three tasks from the General Language Understanding Evaluation (GLUE) benchmark suite. This demo contains binary classification (for sentiment analysis, using SST2), multi-class classification (for textual entailment, using MultiNLI), and regression (for measuringtext similarity, using STS-B).</div>
<div class="demo-card-cta-button"><a href="/lit/demos/glue.html"></a></div>
</div>
<div class="demo-card mdl-cell mdl-cell--6-col mdl-cell--4-col-tablet mdl-cell--4-col-phone">
<div class="demo-card-title"><a href="https://colab.research.google.com/github/PAIR-code/lit/blob/main/lit_nlp/examples/notebooks/LIT_sentiment_classifier.ipynb" target="_blank">Notebook usage</a></div>
<div class="demo-card-tags"> <span class="demo-tag"> BERT </span> <span class="demo-tag"> binary classification </span> <span class="demo-tag"> notebooks </span>
</div>
<div class="demo-card-data-source-title">DATA SOURCES</div>
<div class="demo-card-data-source">
Stanford Sentiment Treebank
</div>
<div class="demo-card-copy">Use LIT directly inside a Colab notebook. Explore binary classification for sentiment analysis using SST2 from the General Language Understanding Evaluation (GLUE) benchmark suite.</div>
<div class="demo-card-cta-button"><a href="https://colab.research.google.com/github/PAIR-code/lit/blob/main/lit_nlp/examples/notebooks/LIT_sentiment_classifier.ipynb"></a></div>
</div>
<div class="demo-card mdl-cell mdl-cell--6-col mdl-cell--4-col-tablet mdl-cell--4-col-phone">
<div class="demo-card-title"><a href="/lit/demos/coref.html" target="_blank">Gender bias in coreference systems</a></div>
<div class="demo-card-tags"> <span class="demo-tag"> BERT </span> <span class="demo-tag"> coreference </span> <span class="demo-tag"> fairness </span> <span class="demo-tag"> Winogender </span>
@@ -128,17 +139,6 @@
</div>
<div class="demo-card-copy">Use a T5 model to summarize text. For any example of interest, quickly find similar examples from the training set, using an approximate nearest-neighbors index.</div>
<div class="demo-card-cta-button"><a href="/lit/demos/t5.html"></a></div>
</div>
<div class="demo-card mdl-cell mdl-cell--6-col mdl-cell--4-col-tablet mdl-cell--4-col-phone">
<div class="demo-card-title"><a href="/lithttps://colab.research.google.com/github/pair-code/lit/blob/main/examples/notebooks/LIT_sentiment_classifier.ipynb" target="_blank">Using LIT in notebooks</a></div>
<div class="demo-card-tags"> <span class="demo-tag"> Colab </span> <span class="demo-tag"> notebooks </span> <span class="demo-tag"> BERT </span> <span class="demo-tag"> binary classification </span>
</div>
<div class="demo-card-data-source-title">DATA SOURCES</div>
<div class="demo-card-data-source">
Stanford Sentiment Treebank
</div>
<div class="demo-card-copy">Use LIT directly inside a Colab notebook. Explore binary classification for sentiment analysis from the General Language Understanding Evaluation (GLUE) benchmark suite.</div>
<div class="demo-card-cta-button"><a href="/lithttps://colab.research.google.com/github/pair-code/lit/blob/main/examples/notebooks/LIT_sentiment_classifier.ipynb"></a></div>
</div>
</div>

1 change: 1 addition & 0 deletions docs/index.html
@@ -134,6 +134,7 @@ <h3>Framework agnostic</h3>
<p>TensorFlow 1.x</p>
<p>TensorFlow 2.x</p>
<p>PyTorch</p>
<p>Notebook compatibility</p>
<p>Custom inference code</p>
<p>Remote Procedure Calls</p>
<p>And more...</p>
6 changes: 5 additions & 1 deletion docs/setup/index.html
@@ -99,6 +99,8 @@ <h2>Install from source</h2>
<p><a name="demos"></a></p>
<h1>Run the included demos</h1>
<p>LIT ships with a number of demos that can easily be run after installation.</p>
<p>LIT can be started on the command line and then viewed in a web browser.</p>
<p>Alternatively, it can be run directly in a Colaboratory or Jupyter notebook and viewed in an output cell of the notebook.</p>
<h2>Quick-start: Classification and regression</h2>
<p>To explore classification and regression models on tasks from the popular <a href="https://gluebenchmark.com/">GLUE benchmark</a>:</p>
<pre class="language-bash"><code class="language-bash">python -m lit_nlp.examples.glue_demo --port<span class="token operator">=</span><span class="token number">5432</span> --quickstart</code></pre>
@@ -110,11 +112,13 @@ <h2>Quick-start: Classification and regression</h2>
<a href="http://ixa2.si.ehu.es/stswiki/index.php/STSbenchmark">STS-B</a> or <a href="https://cims.nyu.edu/~sbowman/multinli/">MultiNLI</a> using the toolbar or the gear icon in
the upper right.</p>
<h2>Language modeling</h2>
<pre class="language-bash"><code class="language-bash">python -m lit_nlp.examples.pretrained_lm_demo <span class="token punctuation">\</span><br> --models<span class="token operator">=</span>bert-base-uncased --port<span class="token operator">=</span><span class="token number">5432</span></code></pre>
<pre class="language-bash"><code class="language-bash">python -m lit_nlp.examples.lm_demo <span class="token punctuation">\</span><br> --models<span class="token operator">=</span>bert-base-uncased --port<span class="token operator">=</span><span class="token number">5432</span></code></pre>
<p>In this demo, you can explore predictions from a pretrained language model (i.e. fill in the blanks).
Navigate to http://localhost:5432 for the UI.</p>
<h2>More examples</h2>
<p>The <a href="https://github.com/PAIR-code/lit/tree/main/lit_nlp/examples">examples</a> directory contains additional examples to explore, all of which can be run similarly to those above.</p>
<h2>Notebook usage</h2>
<p>A simple Colab demo can be found <a href="https://colab.research.google.com/github/PAIR-code/lit/blob/main/lit_nlp/examples/notebooks/LIT_sentiment_classifier.ipynb">here</a>. Just run all the cells to see LIT on an example classification model right in the notebook.</p>
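In outline, the notebook flow is a single short Python cell: wrap models and datasets, build a LitWidget, and render it in an output cell. The sketch below is illustrative only — the wrapper classes and checkpoint path are assumptions modeled on the example notebooks, not copied from this page:

```python
# Minimal sketch of LIT in a Colab/Jupyter notebook (names and paths are illustrative).
from lit_nlp import notebook
from lit_nlp.examples.datasets import glue
from lit_nlp.examples.models import glue_models

# Wrap a dataset and a model behind LIT's Dataset/Model interfaces.
datasets = {"sst_dev": glue.SST2Data("validation")}
models = {"sst_tiny": glue_models.SST2Model("/path/to/sst2-tiny")}  # assumed checkpoint path

# Render the LIT UI directly in the notebook's output cell.
widget = notebook.LitWidget(models, datasets, height=800)
widget.render()
```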
<div class="spacer" style="height:50px;"></div>
<p><a name="custom"></a></p>
<h1>Use LIT on your own models and data</h1>
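For orientation, a custom setup typically means wrapping your data and inference code behind LIT's Dataset and Model interfaces. The sketch below is an assumption-laden outline (class and type names taken from the lit_nlp.api modules; the toy examples and heuristic "classifier" are placeholders, not the documented recipe):

```python
# Hedged sketch of wrapping custom data and a custom model for LIT.
from lit_nlp.api import dataset as lit_dataset
from lit_nlp.api import model as lit_model
from lit_nlp.api import types as lit_types

class MyReviews(lit_dataset.Dataset):
  """Tiny in-memory dataset; each example is a dict matching spec()."""
  def __init__(self):
    self._examples = [
        {"sentence": "A great, heartfelt film.", "label": "1"},
        {"sentence": "Two hours I will never get back.", "label": "0"},
    ]
  def spec(self):
    return {
        "sentence": lit_types.TextSegment(),
        "label": lit_types.CategoryLabel(vocab=["0", "1"]),
    }

class MyModel(lit_model.Model):
  """Exposes custom inference code through LIT's Model interface."""
  def input_spec(self):
    return {"sentence": lit_types.TextSegment()}
  def output_spec(self):
    return {"probas": lit_types.MulticlassPreds(vocab=["0", "1"], parent="label")}
  def predict_minibatch(self, inputs):
    # Placeholder scoring; replace with calls into your real model.
    for ex in inputs:
      p = 0.9 if "great" in ex["sentence"].lower() else 0.1
      yield {"probas": [1.0 - p, p]}

# Serving these wrappers is then analogous to the bundled demos, e.g. (sketch):
#   from lit_nlp import dev_server
#   dev_server.Server({"my_model": MyModel()}, {"my_data": MyReviews()}, port=5432).serve()
```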
5 changes: 1 addition & 4 deletions website/src/demos.md
@@ -19,7 +19,7 @@ color: "#49596c"

{% include partials/demo-card c-title: "Notebook usage", link: "https://colab.research.google.com/github/PAIR-code/lit/blob/main/lit_nlp/examples/notebooks/LIT_sentiment_classifier.ipynb",
c-data-source: "Stanford Sentiment Treebank"
c-copy: "Use LIT directly inside of a colab notebook. This simple demo shows a binary classifier for sentiment analysis using SST2.", tags: "BERT, binary classification, notebook", external:"true" %}
c-copy: "Use LIT directly inside a Colab notebook. Explore binary classification for sentiment analysis using SST2 from the General Language Understanding Evaluation (GLUE) benchmark suite.", tags: "BERT, binary classification, notebooks", external:"true" %}

{% include partials/demo-card c-title: "Gender bias in coreference systems", link: "/demos/coref.html",
c-data-source: "Winogender schemas", c-copy: "Use LIT to explore gendered associations in a coreference system, which matches pronouns to their antecedents. This demo highlights how LIT can work with structured prediction models (edge classification), and its capability for disaggregated analysis.", tags: "BERT, coreference, fairness, Winogender", external:"true" %}
@@ -29,7 +29,4 @@ color: "#49596c"

{% include partials/demo-card c-title: "Text generation", link: "/demos/t5.html",
c-data-source: "CNN / Daily Mail", c-copy: "Use a T5 model to summarize text. For any example of interest, quickly find similar examples from the training set, using an approximate nearest-neighbors index.", tags: "T5, generation", external:"true" %}

{% include partials/demo-card c-title: "Using LIT in notebooks", link: "https://colab.research.google.com/github/pair-code/lit/blob/main/examples/notebooks/LIT_sentiment_classifier.ipynb",
c-data-source: "Stanford Sentiment Treebank", c-copy: "Use LIT directly inside a Colab notebook. Explore binary classification for sentiment analysis from the General Language Understanding Evaluation (GLUE) benchmark suite.", tags: "Colab, notebooks, BERT, binary classification", external:"true" %}
</div>
2 changes: 1 addition & 1 deletion website/src/setup.md
@@ -89,7 +89,7 @@ the upper right.
## Language modeling

```bash
python -m lit_nlp.examples.pretrained_lm_demo \
python -m lit_nlp.examples.lm_demo \
--models=bert-base-uncased --port=5432
```

