
Errors while trying to run the examples #1501

Open
aneeshmb02 opened this issue Jun 20, 2024 · 16 comments

Comments

@aneeshmb02

aneeshmb02 commented Jun 20, 2024

I am trying to run the examples just to make sure they work, but I am running into this error: ImportError: cannot import name 'layout' from 'lit_nlp.api'. Why is this happening and what can I do?

@RyanMullins
Member

How have you installed LIT in your environment, via the repo or the pip package?

@aneeshmb02
Author

Using pip. I had to install 0.4.1 because with the newer version I was getting another error (TypeError: 'type' object is not subscriptable).

@RyanMullins
Member

For reference, v0.4.1 is 2.5 years old and definitely not recommended for use by the team, so I'm very curious what's going on here and would strongly prefer to help you get on the mainline release.

Can you say more about your Python environment and setup...

Also, any error logs you can provide would be helpful.

@aneeshmb02
Author

aneeshmb02 commented Jun 21, 2024

Initially I got the error TypeError: 'type' object is not subscriptable. When I looked it up in the issues, I found #976, which mentioned installing v0.4.1, so I did. That seemed to resolve the issue but caused another one. It turns out an outdated Python was the problem all along.
Now I am using Python 3.11 (Ubuntu 20.04, Intel CPU).
I am not using a notebook; I am simply trying to run the .py examples (using python itself, though, since I couldn't get blaze to work).
But now I am getting the following error: AttributeError: module 'keras' has no attribute 'config'.
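For reference, that TypeError typically comes from subscripted builtin type hints (e.g. list[str]), which only work on Python 3.9+; a minimal illustration of the failure mode, not code from LIT itself:

# Minimal illustration (not LIT code): annotations are evaluated at function
# definition time, so subscripting a builtin type requires Python 3.9+.
def tokenize(text: str) -> list[str]:
    # On Python 3.8 and earlier, defining this function raises:
    # TypeError: 'type' object is not subscriptable
    return text.split()

print(tokenize("hello world"))  # ['hello', 'world']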

@aneeshmb02 aneeshmb02 changed the title ImportError: cannot import name 'layout' from 'lit_nlp.api' Errors while trying to run the examples Jun 21, 2024
@RyanMullins
Member

Sounds like you're trying to use our LM Salience demo?

That Keras issue means you need to force-upgrade the library to v3; tensorflow v2.15 installs Keras v2 by default, but v3 is required for that demo (and IIRC it is the only demo we have that requires v3).

pip install -U keras keras-nlp
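To confirm the upgrade took effect, a quick check (a minimal sketch; keras.config is a Keras 3 API, which is why the attribute error appears when Keras 2 is still installed):

import keras

# keras.config only exists in Keras 3; under Keras 2 this reproduces
# the AttributeError shown above.
print(keras.__version__)       # should start with "3."
print(keras.config.backend())  # e.g. "tensorflow", "jax", or "torch"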

@aneeshmb02
Author

I am still getting the same error, but maybe I do not need to run this demo at all.
My objective is to use LIT to analyze a couple of LLMs and explain their outputs for particular inputs. I also need to compare variously quantized versions of those models. Can you please guide me? (I am an inexperienced intern, so please excuse me.)

@RyanMullins
Member

Would it be helpful to schedule a video call to talk through the LIT setup? You should be able to see my email on my profile page. Send me an email and we can figure out a time.

@wandabwa2004

I'm facing issues too, @RyanMullins. For example, from lit_nlp.examples.datasets import lm as lm_data and from lit_nlp.examples.models import instrumented_keras_lms both fail with ModuleNotFoundError (e.g. No module named 'lit_nlp.examples.datasets'). The modules are missing from the examples folder in this repo. Would you please advise on this?

@bdu91
Collaborator

bdu91 commented Sep 12, 2024

The datasets and models have been moved.

from lit_nlp.examples.datasets import lm as lm_data -> from lit_nlp.examples.prompt_debugging import datasets as lm_data.

from lit_nlp.examples.models import instrumented_keras_lms -> from lit_nlp.examples.prompt_debugging import keras_lms

Where did you find this line of code that causes the error? We should fix this if it is still there.
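For anyone landing here later, the updated imports look like this (a short sketch assuming LIT v1.2+, per the mapping above):

# Updated module paths after the prompt_debugging reorganization.
from lit_nlp.examples.prompt_debugging import datasets as lm_data
from lit_nlp.examples.prompt_debugging import keras_lms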

@RyanMullins
Member

@bdu91 I have the fix out for review now. Should be live on DevSite tomorrow morning.

@RyanMullins
Member

RyanMullins commented Sep 12, 2024

@wandabwa2004 the docs are now updated (with significantly fewer steps) for LIT v1.2. Let me know if you run into further issues.

@wandabwa2004

Hello @RyanMullins. The updated one works well on Google Colab. However, I've been trying to set it up locally and have run into issues, specifically with building the front-end files. No error is shown, but nothing appears as output (just a blank page). Do you have a sample notebook that interfaces with other LLMs stored locally?

@wandabwa2004

Sorry, but how do I run a local example with this syntax:

python3 -m lit_nlp.examples.prompt_debugging.server -- \
  --models=gemma:gemma_1.1_instruct_2b_en \
  --alsologtostderr

I used something like this for a Mistral model. I'm not able to use an HF link, so I have the model in a local directory:

python3 -m lit_nlp.examples.prompt_debugging.server -- \
  --models=mistral:/home/xxx/mistral \
  --alsologtostderr

It throws the error "ValueError: Unknown preset identifier. A preset must be ......"

How do I set the preset for the path to a local file?

@RyanMullins
Member

@wandabwa2004 Hugging Face model identifiers look an awful lot like file paths, which can cause problems. There are unresolved questions on StackOverflow and HF Forums about this and the answer often comes down to playing around with the paths until something works.

Is there any chance you can share a more complete stack trace and the output of ls -l from the directory containing your Mistral weights in a GitHub Gist? It's possible LIT could do more to parameterize our AutoModel initializer calls to assist in this.

@wandabwa2004

wandabwa2004 commented Sep 24, 2024

@RyanMullins below are the stack trace and the output of ls -l

(lit_env310) myuserid@xxxx-xxx-server:~/data/genxai/models$ python3 -m lit_nlp.examples.prompt_debugging.server --models=mistral:/home/myuserid/data/models/mistral
2024-09-24 21:21:20.286610: I external/local_xla/xla/tsl/cuda/cudart_stub.cc:32] Could not find cuda drivers on your machine, GPU will not be used.
2024-09-24 21:21:20.290856: I external/local_xla/xla/tsl/cuda/cudart_stub.cc:32] Could not find cuda drivers on your machine, GPU will not be used.
2024-09-24 21:21:20.302943: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:485] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-09-24 21:21:20.321394: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:8454] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-09-24 21:21:20.327041: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1452] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-09-24 21:21:20.341997: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-09-24 21:21:21.596789: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2024-09-24 21:21:26.959503: W tensorflow/core/common_runtime/gpu/gpu_device.cc:2343] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
I0924 21:21:26.959838 140427982419776 models.py:81] Loading model 'mistral' from '/home/myuserid/data/models/mistral'
Traceback (most recent call last):
File "/home/myuserid/.conda/envs/lit_env310/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/myuserid/.conda/envs/lit_env310/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/home/myuserid/.conda/envs/lit_env310/lib/python3.10/site-packages/lit_nlp/examples/prompt_debugging/server.py", line 174, in
app.run(main)
File "/home/myuserid/.conda/envs/lit_env310/lib/python3.10/site-packages/absl/app.py", line 308, in run
_run_main(main, args)
File "/home/myuserid/.conda/envs/lit_env310/lib/python3.10/site-packages/absl/app.py", line 254, in _run_main
sys.exit(main(argv))
File "/home/myuserid/.conda/envs/lit_env310/lib/python3.10/site-packages/lit_nlp/examples/prompt_debugging/server.py", line 155, in main
models=models.get_models(
File "/home/myuserid/.conda/envs/lit_env310/lib/python3.10/site-packages/lit_nlp/examples/prompt_debugging/models.py", line 92, in get_models
models |= keras_lms.initialize_model_group_for_salience(
File "/home/myuserid/.conda/envs/lit_env310/lib/python3.10/site-packages/lit_nlp/examples/prompt_debugging/keras_lms.py", line 540, in initialize_model_group_for_salience
generation_model = KerasGenerationModel(*args, **kw)
File "/home/myuserid/.conda/envs/lit_env310/lib/python3.10/site-packages/lit_nlp/examples/prompt_debugging/keras_lms.py", line 206, in init
super().init(*args, **kw)
File "/home/myuserid/.conda/envs/lit_env310/lib/python3.10/site-packages/lit_nlp/examples/prompt_debugging/keras_lms.py", line 89, in init
self.model = keras_models.CausalLM.from_preset(model_name_or_path)
File "/home/myuserid/.conda/envs/lit_env310/lib/python3.10/site-packages/keras_nlp/src/models/task.py", line 192, in from_preset
loader = get_preset_loader(preset)
File "/home/myuserid/.conda/envs/lit_env310/lib/python3.10/site-packages/keras_nlp/src/utils/preset_utils.py", line 635, in get_preset_loader
if not check_file_exists(preset, CONFIG_FILE):
File "/home/myuserid/.conda/envs/lit_env310/lib/python3.10/site-packages/keras_nlp/src/utils/preset_utils.py", line 276, in check_file_exists
get_file(preset, path)
File "/home/myuserid/.conda/envs/lit_env310/lib/python3.10/site-packages/keras_nlp/src/utils/preset_utils.py", line 237, in get_file
raise ValueError(
ValueError: Unknown preset identifier. A preset must be a one of:

  1. a built-in preset identifier like 'bert_base_en'
  2. a Kaggle Models handle like 'kaggle://keras/bert/keras/bert_base_en'
  3. a Hugging Face handle like 'hf://username/bert_base_en'
  4. a path to a local preset directory like './bert_base_en
    Use print(cls.presets.keys()) to view all built-in presets for API symbol cls.
    Received: preset='/home/myuserid/data/models/mistral'

Output of ls -l :

-rw-r--r-- 1 myuserid domain users 571 Sep 24 12:20 config.json
-rw-r--r-- 1 myuserid domain users 116 Sep 24 12:20 generation_config.json
-rw-r--r-- 1 myuserid domain users 4997857264 Sep 24 12:39 model-00001-of-00002.safetensors
-rw-r--r-- 1 myuserid domain users 4083429752 Sep 24 12:36 model-00002-of-00002.safetensors
-rw-r--r-- 1 myuserid domain users 6409945088 Sep 24 12:43 pytorch_model-00001-of-00002.bin
-rw-r--r-- 1 myuserid domain users 5064823659 Sep 24 12:38 pytorch_model-00002-of-00002.bin
-rw-r--r-- 1 myuserid domain users 23950 Sep 24 12:20 pytorch_model.bin.index.json
-rw-r--r-- 1 myuserid domain users 414 Sep 24 12:20 special_tokens_map.json
-rw-r--r-- 1 myuserid domain users 2103 Sep 24 12:20 tokenizer_config.json
-rw-r--r-- 1 myuserid domain users 1795188 Sep 24 12:20 tokenizer.json
-rw-r--r-- 1 myuserid domain users 493443 Sep 24 12:20 tokenizer.model.v1

@RyanMullins
Member

Thanks for the additional info, @wandabwa2004. From the command, error messages, and the ls output, I think this is from trying to load Hugging Face Transformers weights into a KerasNLP model. Try this instead:

python3 -m lit_nlp.examples.prompt_debugging.server \
  --models=mistral:/home/myuserid/data/models/mistral \
  --dl_framework=transformers \
  --dl_runtime=torch
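If it helps, you can also sanity-check the local checkpoint independently of LIT; the ls -l output above is a Hugging Face Transformers layout, so it should load with the transformers API rather than as a KerasNLP preset (a hedged sketch, using the same local path as above):

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the local HF-format checkpoint shown in the ls -l output above.
model_dir = "/home/myuserid/data/models/mistral"
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(model_dir)

inputs = tokenizer("Hello", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=5)[0]))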
