Internal change #1515

Open · wants to merge 1 commit into base: dev
README.md (23 changes: 5 additions & 18 deletions)

````diff
@@ -90,8 +90,8 @@ git clone https://github.com/PAIR-code/lit.git && cd lit
 ```
 
 
 Note: be sure you are running Python 3.10. If you have a different version on
 your system, use the `conda` instructions below to set up a Python 3.10
 environment.
 
 Set up a Python environment with `venv`:
@@ -142,13 +142,13 @@ To explore classification and regression models tasks from the popular
 [GLUE benchmark](https://gluebenchmark.com/):
 
 ```sh
-python -m lit_nlp.examples.glue_demo --port=5432 --quickstart
+python -m lit_nlp.examples.glue.demo --port=5432 --quickstart
 ```
 
 Or, using `docker`:
 
 ```sh
-docker run --rm -e DEMO_NAME=glue_demo -p 5432:5432 -t lit-nlp --quickstart
+docker run --rm -e DEMO_NAME=glue -p 5432:5432 -t lit-nlp
 ```
 
 Navigate to http://localhost:5432 to access the LIT UI.
@@ -160,19 +160,6 @@ but you can switch to
 [STS-B](http://ixa2.si.ehu.es/stswiki/index.php/STSbenchmark) or
 [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) using the toolbar or the
 gear icon in the upper right.
-
-### Quick-start: language modeling
-
-To explore predictions from a pre-trained language model (BERT or GPT-2), run:
-
-```sh
-python -m lit_nlp.examples.lm_demo --models=bert-base-uncased --port=5432
-```
-
-Or, using `docker`:
-
-```sh
-docker run --rm -e DEMO_NAME=lm_demo -p 5432:5432 -t lit-nlp --models=bert-base-uncased
-```
-
-And navigate to http://localhost:5432 for the UI.
@@ -192,7 +179,7 @@ See [lit_nlp/examples](./lit_nlp/examples). Most are run similarly to the
 quickstart example above:
 
 ```sh
-python -m lit_nlp.examples.<example_name> --port=5432 [optional --args]
+python -m lit_nlp.examples.<example_name>.demo --port=5432 [optional --args]
 ```
 
 ## User Guide
````
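The README changes move every example's entry point from a flat `lit_nlp.examples.<name>` module to a `lit_nlp.examples.<name>.demo` module inside a per-example package. As a quick sketch, the loop below only prints the new-style launch commands; `glue`, `penguin`, and `tydi` are the demo names this change mentions, and any others would be assumptions:

```shell
# Print the new-style launch command for each demo package.
# Old: python -m lit_nlp.examples.<name>       (flat module)
# New: python -m lit_nlp.examples.<name>.demo  (package with a demo.py)
for name in glue penguin tydi; do
  echo "python -m lit_nlp.examples.${name}.demo --port=5432"
done
```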
lit_nlp/examples/gunicorn_config.py (6 changes: 3 additions & 3 deletions)

```diff
@@ -12,15 +12,15 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 # ==============================================================================
-"""Config for gunicorn for cloud-hosted demos."""
+"""gunicorn configuration for cloud-hosted demos."""
 
 import os
 
-_DEMO_NAME = os.getenv('DEMO_NAME', 'glue_demo')
+_DEMO_NAME = os.getenv('DEMO_NAME', 'glue')
 _DEMO_PORT = os.getenv('DEMO_PORT', '5432')
 
 bind = f'0.0.0.0:{_DEMO_PORT}'
 timeout = 3600
 threads = 8
 worker_class = 'gthread'
-wsgi_app = f'lit_nlp.examples.{_DEMO_NAME}:get_wsgi_app()'
+wsgi_app = f'lit_nlp.examples.{_DEMO_NAME}.demo:get_wsgi_app()'
```
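To see what the gunicorn config change does, it helps to compose the `wsgi_app` string by hand. A minimal standalone sketch, with no LIT import needed; the default demo name `glue` matches the new config:

```python
import os

# DEMO_NAME now names the example *package* (e.g. 'glue'), and the
# '.demo' module suffix lives in the template instead of the env var.
demo_name = os.getenv('DEMO_NAME', 'glue')
wsgi_app = f'lit_nlp.examples.{demo_name}.demo:get_wsgi_app()'
print(wsgi_app)
```

With `DEMO_NAME` unset, this prints `lit_nlp.examples.glue.demo:get_wsgi_app()`, the `module:callable` form gunicorn accepts for its `wsgi_app` setting.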
lit_nlp/examples/tydi/demo.py (5 changes: 4 additions & 1 deletion)

```diff
@@ -12,6 +12,7 @@
 
 from absl import app
 from absl import flags
+from absl import logging
 from lit_nlp import dev_server
 from lit_nlp import server_flags
 from lit_nlp.components import word_replacer
@@ -40,7 +41,9 @@ def get_wsgi_app() -> Optional[dev_server.LitServerType]:
   # Parse flags without calling app.run(main), to avoid conflict with
   # gunicorn command line flags.
   unused = flags.FLAGS(sys.argv, known_only=True)
-  return main(unused)
+  if unused:
+    logging.info("tydi demo: get_wsgi_app() called with unused args: %s", unused)
+  return main([])
```
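The new `get_wsgi_app()` logic (parse the flags you know, log the leftovers rather than forwarding them, and call `main([])`) is a standard way to coexist with gunicorn's own command line. A rough standalone analogy using `argparse` instead of absl, with hypothetical flag names rather than LIT's real ones:

```python
import argparse
import logging

def get_wsgi_app(argv):
    """Parse known flags; log and ignore anything else (e.g. gunicorn's)."""
    parser = argparse.ArgumentParser()
    parser.add_argument('--port', type=int, default=5432)
    known, unused = parser.parse_known_args(argv)
    if unused:
        # Leftover args likely belong to gunicorn; log instead of crashing.
        logging.info('get_wsgi_app() called with unused args: %s', unused)
    return known  # stand-in for building and returning the real WSGI app

app_cfg = get_wsgi_app(['--port=4321', '--workers=2'])
print(app_cfg.port)  # prints 4321; --workers is left for gunicorn
```

Passing `[]` to `main()` instead of the unparsed leftovers, as this PR does, keeps gunicorn-only flags from reaching the demo's own argument handling.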
website/sphinx_src/docker.md (10 changes: 5 additions & 5 deletions)

````diff
@@ -49,13 +49,13 @@ below.
 ```shell
 # DEMO_NAME is used to complete the Python module path
 #
-#   "lit_nlp.examples.$DEMO_NAME"
+#   "lit_nlp.examples.$DEMO_NAME.demo:get_wsgi_app()"
 #
 # Therefore, valid values for DEMO_NAME are Python module paths in the
 # lit_nlp/examples directory, such as
 #
-# * direct children -- glue_demo, lm_demo, image_demo, t5_demo, etc.
-docker run --rm -p 5432:5432 -e DEMO_NAME=lm_demo lit-nlp
+# * direct children -- glue, penguin, tydi, etc.
+docker run --rm -p 5432:5432 -e DEMO_NAME=penguin lit-nlp
 
 # Use the DEMO_PORT environment variable as to change the port that LIT uses in
 # the container. Be sure to also change the -p option to map the container's
@@ -66,8 +66,8 @@ docker run --rm -p 2345:2345 -e DEMO_PORT=2345 lit-nlp
 # containers on your machine using the combination of the DEMO_NAME and
 # DEMO_PORT arguments, and docker run with the -d flag to run the container in
 # the background.
-docker run -d -p 5432:5432 -e DEMO_NAME=t5_demo lit-nlp
-docker run -d -p 2345:2345 -e DEMO_NAME=lm_demo -e DEMO_PORT=2345 lit-nlp
+docker run -d -p 5432:5432 -e DEMO_NAME=penguin lit-nlp
+docker run -d -p 2345:2345 -e DEMO_NAME=tydi -e DEMO_PORT=2345 lit-nlp
 ```
````