Replies: 5 comments 4 replies
-
Hello @dhlolo There are some optimizations that are only performed when no custom token patterns are used.
The main one that could affect performance in this case is that the "starting character optimization" is not applied automatically when a custom token pattern is used. See the documentation below on how to resolve this. That said, even without these optimizations, the numbers you posted seem really high.
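The missing optimization can be restored by supplying the hint manually. A minimal sketch, assuming Chevrotain's documented custom-pattern API; the token name and matching logic are illustrative, not taken from the wxml-parser code:

```javascript
const { createToken } = require("chevrotain");

// Hypothetical token matched by a custom function. start_chars_hint tells
// the lexer which first characters can begin this token, restoring the
// "starting character" optimization that is skipped automatically for
// custom (non-RegExp) patterns.
const OpenTag = createToken({
  name: "OpenTag",
  pattern: (text, startOffset) => {
    // ... custom matching logic returning a RegExp-exec-like result ...
    return null;
  },
  start_chars_hint: ["<"],
  line_breaks: false, // custom patterns must declare this explicitly
});
```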
-
Switching to discussion.
-
I printed the number of times the 'pattern' callback is invoked; it gets noticeably slower after 250+ calls within a single file. I strongly suspect there is a memory leak. You can see the code here and try it with some wxml files.
-
Hi again @dhlolo This kind of potential issue really needs a more focused, minimal, and easy-to-run reproduction to explore properly, e.g.:
Otherwise it generally takes far too much time to investigate, particularly considering this is a free-time OSS project... I also took a peek at https://github.com/wxmlfile/wxml-parser/pull/25/files#diff-5389a63321fd22d5c5d63717ea8ea9aa42da16ea3bf04392b36b6d657613213bR115-R123 but I can't really tell which version of the code is relevant, as there are multiple versions commented out in this PR.
Cheers.
-
Are you referring to defining a Chevrotain parser using a TextMate grammar? Note the last comment in the thread above.
-
Using a RegExp as the token pattern seems to be fast, but when I use a custom pattern function:

```js
function matchCustomToken(text, startOffset) {
  return REG.exec(text.substring(startOffset));
}
```

it takes about 20 seconds to process 500 lines, and one and a quarter minutes to process 1000 lines.
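A likely cause of this slowdown is the `text.substring(startOffset)` call, which copies the remaining input on every invocation, making lexing quadratic in file size. A sticky (`y`-flag) RegExp can match directly at an offset with no copy. A minimal sketch; the pattern below is an illustrative stand-in for `REG`:

```javascript
// Sticky ("y" flag) regexes match exactly at lastIndex, so we can point
// them at startOffset instead of copying the tail of the input string.
// This pattern is a stand-in for the original REG.
const STICKY_REG = /[A-Za-z][A-Za-z0-9]*/y;

function matchCustomTokenFast(text, startOffset) {
  STICKY_REG.lastIndex = startOffset;
  // Returns null if the pattern does not match exactly at startOffset,
  // equivalent to a "^"-anchored REG.exec(text.substring(startOffset)).
  return STICKY_REG.exec(text);
}
```

One caveat: with a sticky regex, the result's `index` is the absolute `startOffset` rather than 0, so any code that relies on a substring-relative index needs adjusting; a lexer that only consumes the matched string (`result[0]`) is unaffected.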