Version
v20.17.0
Platform
Darwin Lucass-MacBook-Pro.local 23.5.0 Darwin Kernel Version 23.5.0: Wed May 1 20:12:58 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6000 arm64
Subsystem
worker
What steps will reproduce the bug?
I'm still working on creating a minimal reproduction for this bug, but it has also been reported in this Nuxt issue: nuxt/nuxt#23832
How often does it reproduce? Is there a required condition?
It is consistent: whenever the server tries to start a new worker and terminate the old one, I see the segmentation fault occur.
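For reference, here is a minimal sketch of the pattern that seems to be involved; the file name `worker.js` and the restart interval are placeholders, and this is not a confirmed reproduction:

```js
'use strict';
// Hypothetical sketch of the restart pattern described above, not a confirmed
// repro: a replacement Worker is constructed while the previous one is still
// being torn down, instead of waiting for its termination to complete.
const path = require('node:path');
const { Worker } = require('node:worker_threads');

let current = null;

function restart() {
  const previous = current;
  // Start the replacement immediately...
  current = new Worker(path.join(__dirname, 'worker.js'));
  // ...while the old worker's termination is still in flight (not awaited).
  if (previous) previous.terminate();
}

// Simulate a dev server restarting its worker repeatedly.
setInterval(restart, 100);
```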
What is the expected behavior? Why is that the expected behavior?
The expected behavior is no segmentation fault at all: the program should successfully terminate the previous worker and start the new one without issues. This is the expected behavior because the program shouldn't crash, given that it is not violating any of the Node APIs.
What do you see instead?
I successfully extracted a core file using a Debug build of current `main`, and can see the following in `lldb` when running `bt all`:

**Backtrace of failed program**
You can see that thread 18 (frame 3) stopped the program because of an assertion in V8 while creating the Isolate (`DisallowGarbageCollection no_gc;`), while thread 15 was waiting for the previous worker to terminate.
Additional information
I suspect there is an issue with creating workers while waiting for a previous worker to terminate (e.g. does the termination process use garbage collection? If so, that could be why the worker starting on thread 18 crashes the program, since V8 doesn't expect GC to be possible at that point), but I don't have the knowledge to tell whether this is expected behavior or not. If it isn't, then there is a bug somewhere that is allowing this to happen, and it is one of the reasons that creating a small reproduction is proving so hard.
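For comparison, this is the serialized variant of the sketch above that I'd expect to be safe; it is untested against this crash, and `worker.js` remains a placeholder:

```js
'use strict';
// Workaround sketch (untested against this crash): fully await the old
// worker's termination before constructing its replacement, so teardown
// and Isolate creation never overlap.
const path = require('node:path');
const { Worker } = require('node:worker_threads');

let current = null;

async function restartSerialized() {
  if (current) {
    // terminate() resolves with the worker's exit code once it has stopped.
    await current.terminate();
  }
  current = new Worker(path.join(__dirname, 'worker.js'));
}
```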
While I'm not able to provide a reproduction at this time, if any of you can give me some pointers on what to look out for when debugging this issue, it would help a lot. I can try to find the bug and submit a fix PR if necessary, but I need some help with it.
Nice...
You could build in debug mode; that way you may get a richer backtrace. Also, attach a debugger where Node is crashing.
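Something like this (the usual debug-build workflow; the job count, `app.js`, and the core-file path are just examples):

```sh
# Build Node with runtime checks enabled; the binary lands in out/Debug/node.
./configure --debug
make -j4

# Either run the failing script under lldb directly...
lldb -- out/Debug/node app.js

# ...or load a core dump produced by the crash.
lldb out/Debug/node -c /path/to/core
(lldb) bt all   # print backtraces for every thread
```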