Does the team have any benchmark on how long models take to compile? I swear it used to be much faster on non-M1 devices, but we're seeing compilation with `cpuAndNeuralEngine` take a really long time recently.
`openai_whisper-large-v3-v20240930_turbo` took 440 s on an M4 Pro (48 GB) and 560 s on an M2 Pro (32 GB), both on macOS 15.3.1.
Subsequent loads are quick, only 3-5 seconds.
Just wondering if it's always been like this, or whether this was a regression in 15.3.1 (non-beta).
Adding more data points as they come up; hope this helps:
18.99 s on an M2, macOS 14.6.1
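For anyone else wanting to reproduce these numbers, here's a minimal sketch of how I'm timing the first (compile-heavy) load versus cached loads, using plain Core ML rather than the WhisperKit wrapper. The model path is a placeholder for wherever your downloaded model component lives; the key part is that `.cpuAndNeuralEngine` is what triggers the long Neural Engine specialization on first load, and the OS caches the result, which is presumably why subsequent loads drop to a few seconds:

```swift
import CoreML
import Foundation

// Placeholder path — point this at a compiled model component
// (e.g. the AudioEncoder.mlmodelc inside the downloaded model folder).
let modelURL = URL(fileURLWithPath: "/path/to/AudioEncoder.mlmodelc")

let config = MLModelConfiguration()
// Same compute units the slow loads above were measured with.
config.computeUnits = .cpuAndNeuralEngine

let start = Date()
// First load on a given OS install pays the ANE specialization cost;
// repeat the load to see the cached (fast) time.
let model = try MLModel(contentsOf: modelURL, configuration: config)
let elapsed = Date().timeIntervalSince(start)
print("Load took \(String(format: "%.2f", elapsed)) s")
```

Running this twice in a row (or deleting the model cache between runs) separates the one-time compile cost from the steady-state load time, which might help narrow down whether 15.3.1 regressed the former, the latter, or both.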
That is quite a while. We've also been hearing scattered reports of issues with 15.3.1, but it needs more investigation to figure out what's really going on. Data like this is very helpful 🙏