Issue with Quantizing EEGNet Model on ESP32-S3 (AIV-744) #194
Comments
We will download the code to analyze the issue. If it's convenient for you to upload the ONNX model file, it will accelerate our debugging process. |
Hi @BlueSkyB, Yes. I have uploaded my trained PyTorch model (*.pt) and the ONNX model produced by the quantization script in a zip file. |
Is it feasible to run a small model like mine on an ESP32-S3 module without PSRAM? I ask because the ESP32-S3-WROOM-1-N16 I received does not have a PSRAM option. |
You can disable the configuration of CONFIG_SPIRAM by using idf.py menuconfig. If there is no PSRAM, the system will default to using internal RAM. In this case, you must ensure that the model is small enough so that the memory required for its parameters and feature maps is less than the available system memory; otherwise, it will not function properly. Alternatively, you can refer to the document https://docs.espressif.com/projects/esp-dl/en/latest/tutorials/how_to_load_model.html to disable param_copy, but this will significantly reduce performance. Additionally, your model contains the Elu operator, which is currently not supported by ESP-DL. You can check the supported operators at https://github.com/espressif/esp-dl/blob/master/operator_support_state.md. We will support it in the future. |
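As a sketch of the CONFIG_SPIRAM change described above (the exact menu path varies between ESP-IDF versions, so treat the path below as approximate):

```
# Run `idf.py menuconfig`, then under
#   Component config -> ESP PSRAM  (older IDF versions: ESP32S3-Specific)
# deselect "Support for external, SPI-connected RAM".
# The resulting sdkconfig will then contain the line:
#   # CONFIG_SPIRAM is not set
```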
Hi @BlueSkyB, thank you so much for your information. I have modified the model to make it simpler for testing, and it successfully passes the quantization script. I imported the eegnet.espdl (18 KB) in the same way as demonstrated in the mobilenet_v2 example, but I encountered the following issue:
Updated (250124): The warning went away once I switched to my ESP32-S3 EVK with PSRAM support; it seems the heap allocation is served from PSRAM there. However, the error is still visible. Error:
Do you know how to resolve the warning and the error?
And here is the log:
It seems something is missing in my code, because 176 = 352/2. For reference, I have included the log from the monitor below.
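The 176-vs-352 relationship can be checked with simple element-count arithmetic. The shapes below are hypothetical stand-ins, not taken from the actual model; they only illustrate the kind of mismatch a Reshape assertion trips on:

```python
from math import prod

# Hypothetical stand-ins: a Reshape target expecting 352 elements,
# while the incoming feature map only carries 176 (16 * 11).
target_shape = (1, 352)
input_shape = (1, 16, 1, 11)

n_in = prod(input_shape)    # 176
n_out = prod(target_shape)  # 352

# A Reshape node asserts that element counts match; here exactly
# half the expected data arrives, matching the 176 = 352/2 observation.
print(n_in, n_out, n_out // n_in)  # 176 352 2
```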
And here is the code in the app_main function:
My ONNX model (with Relu changed to PRelu) |
Hi @manhdatbn93. We have fixed the bug, and the model now runs normally. Please update both esp-ppq and esp-dl. |
Hi @BlueSkyB, Thank you very much for the update. I just wanted to clarify your comment, "the model is now running normally." Does this mean the model is successfully running on the ESP32S3 series chip, or are you referring to the quantization script working correctly for this model? |
Hi @BlueSkyB, After updating, the quantization script works, and I successfully obtained the model.espdl. However, when deploying this model on the ESP32-S3-DevKitC-1 v1.1, I encountered an assertion failure in the reshape module. Here is the error log:
I added some code to print the input shape in the get_output_shape() function of the Reshape module (dl_module_reshape.hpp), as shown below:
Below is the log after adding that code.
|
I suspect there is something missing in my model, but I am not sure what it is. To debug, I reverted to the example provided in the repository. In the \esp-dl\tools\quantization\quantize_torch_model.py script, I changed the TARGET to the ESP32-S3 device while keeping the rest of the code unchanged, then ran the script. However, the mobilenet_v2.info file generated by the script differs from the example file provided in the repository (\esp-dl\examples\mobilenet_v2\models\esp32s3\mobilenet_v2.info). I used the new mobilenet_v2.espdl file generated from my script to flash the ESP32-S3 with PSRAM support, but it has not been successful.

Could you please help clarify why the mobilenet_v2.info file generated by the script is different from the example file in the repository?

Below is the log from the example code:
Here is the log after switching to the newly generated mobilenet_v2.espdl (renamed mobilenet_v2_new.espdl):
For reference, I have uploaded the files generated by the script in esp-dl\models\torch.. |
The ONNX file you uploaded previously, after being quantized by ESP-PPQ, can be loaded and run normally on the ESP32-S3. |
Hi @BlueSkyB, Thank you so much for taking the time to debug this issue. Could you let me know the versions of the libraries you are using, such as esp-ppq, torch, torchvision, and Python? I ask because the mobilenet_v2 example still produces abnormal values that differ from the reference mobilenet_v2 output, even after I create a new environment and install the libraries from the quantization folder. |
I have installed quite a few packages in my environment; I will list the packages directly related to esp-ppq first: I suspect the issue may be caused by the latest flatbuffers being incompatible with code generated by an older version. Try downgrading flatbuffers and see if things return to normal.
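One quick way to compare environments is a stdlib-only snippet that prints the installed versions of the suspect packages (the package list below is just the ones mentioned in this thread; extend it as needed):

```python
from importlib import metadata

# Packages discussed in this thread; extend as needed.
for pkg in ("flatbuffers", "torch", "torchvision"):
    try:
        print(f"{pkg}: {metadata.version(pkg)}")
    except metadata.PackageNotFoundError:
        print(f"{pkg}: not installed")
```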
Hi @BlueSkyB, That is strange! I have updated all the libraries to match your versions, but I am still facing problems with the quantization tools: the generated mobilenet_v2.info shows abnormal values. What is the "PyTorch 1.13.0" package? I am on Windows, and I assume it refers to the torch library.
Yes, what you are seeing is quite strange.
Hi @BlueSkyB, I am using Python version 3.10.11. Do you have any suggestions for addressing this issue?
Remaining points of suspicion:
|
Checklist
Issue or Suggestion Description
Hi Team, I am currently working on quantizing the EEGNet model for deployment on my custom board based on the ESP32-S3. My goal is to use this model for a simple classification task: detecting eye blinks versus non-blinks from EEG data.
Here is the EEGNet model I am using: https://github.com/YuDongPan/DL_Classifier/blob/main/Model/EEGNet.py
Issue
When I attempt to quantize this model using ESPDL, I encounter an error in the avg_pool2d function. The error details are as follows:
My batch size is
Here is the function used for quantization:
I would appreciate it if you could help identify the root cause of this issue and provide guidance on:
[batch_size, channels, samples].
Thank you for your support. Please let me know if you need additional details about the setup or the error logs.
Additional Details
Data Input Shape: [batch_size, 1, 8, 500]
Target Board: ESP32-S3
Quantization Tool: ESPDL
Framework: PyTorch
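For an input of [batch_size, 1, 8, 500], the avg_pool2d output sizes can be sanity-checked by hand. The kernel sizes below are hypothetical (the standard EEGNet pools only along the sample axis, typically with (1, 4) then (1, 8), but this model's exact sizes may differ); the arithmetic also shows how a kernel larger than an axis collapses the output to zero, a common cause of avg_pool2d shape errors:

```python
from math import floor

def pool_out(size, kernel, stride):
    """Output length along one axis for unpadded pooling (floor mode)."""
    return floor((size - kernel) / stride) + 1

# Input from the issue: [batch, 1, 8, 500] -> spatial dims 8 x 500.
h, w = 8, 500
# Hypothetical pooling stages (stride == kernel, the nn.AvgPool2d default):
for kh, kw in [(1, 4), (1, 8)]:
    h, w = pool_out(h, kh, kh), pool_out(w, kw, kw)
    print((h, w))  # (8, 125), then (8, 15)

# A kernel larger than the axis it pools over yields a non-positive size:
print(pool_out(8, 16, 16))  # 0 -- pooling would fail here
```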
Full modified code (all other code is the same as the example):
Attached is the *.onnx model I obtained, even though the run failed.
(screenshot attachment)