Scenario
I am running BullMQ (via a NestJS app) on six ECS instances on AWS.
The queue runs with a concurrency of two per instance, with no limiter in place, so I expect the effective concurrency to be around 6 × 2 = 12. In practice the observed concurrency is always 24.

This is just one example of the problem; I see the same doubling on other queues as well. I haven't seen this issue in the past, and I have been a user of BullMQ for many years now. I suspect an upgrade of one of the dependencies could be the cause, but it's just a theory; I don't have any leads. I have been experiencing this issue for at least a few weeks now.
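For context, this is roughly how each instance's worker is set up (a minimal sketch; the queue name, processor body, and connection details are placeholders, not my actual code):

```ts
import { Worker } from 'bullmq';

// One worker per ECS task; with six tasks this should give a total
// concurrency of about 6 × 2 = 12.
const worker = new Worker(
  'example-queue', // placeholder queue name
  async (job) => {
    // ... actual job processing
  },
  {
    connection: { host: 'redis-host', port: 6379 }, // placeholder connection
    concurrency: 2, // per-worker concurrency; no limiter configured
  },
);
```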
What have I tried?
I tried calling `await queue.setGlobalConcurrency(x);` for each queue where I experienced this issue. However, this did not reduce the concurrency: if I set the global concurrency in the example above to 12, the observed concurrency remains at 24, even after pausing and resuming all jobs in the queue.

I also reviewed my application code carefully for any duplicate registration of queues or flows, or duplicate module imports that might create multiple processor instances, and I have not found any misconfiguration.
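For reference, the shape of the `setGlobalConcurrency` call described above (a minimal sketch; the queue name and connection are placeholders, and `getGlobalConcurrency` is assumed available in this bullmq 5.x release):

```ts
import { Queue } from 'bullmq';

async function capQueueConcurrency(): Promise<void> {
  const queue = new Queue('example-queue', {
    connection: { host: 'redis-host', port: 6379 }, // placeholder connection
  });

  // Cap the number of jobs processed concurrently across ALL workers.
  await queue.setGlobalConcurrency(12);

  // Read the value back to confirm what the queue reports.
  console.log(await queue.getGlobalConcurrency()); // expected: 12

  await queue.close();
}
```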
I confirmed that the number of ECS instances is six and that no auto scaling was occurring at the time I observed this issue.
For now I plan to divide my queue concurrency by 2 in the hope that it remedies the issue, but any recommendations would be appreciated.
Versions
NodeJS 22 LTS
@nestjs/bullmq 11.0.0
bullmq 5.34.2
Replies: 1 comment, 1 reply

Can you provide a test that reproduces this behaviour? If the concurrency is indeed double what you have set up, it should be relatively easy to demonstrate in a test. Also, have you called …
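A minimal sketch of the kind of reproduction test suggested above (the queue name, timings, and local Redis connection are all assumptions): it counts how many jobs are in flight at once and records the peak.

```ts
import { Queue, Worker } from 'bullmq';

// Count how many jobs run at the same time and record the peak.
async function measurePeakConcurrency(): Promise<number> {
  const connection = { host: 'localhost', port: 6379 }; // assumed local Redis
  const queue = new Queue('concurrency-test', { connection });

  let running = 0;
  let peak = 0;

  const worker = new Worker(
    'concurrency-test',
    async () => {
      running += 1;
      peak = Math.max(peak, running);
      await new Promise((resolve) => setTimeout(resolve, 200)); // simulate work
      running -= 1;
    },
    { connection, concurrency: 2 },
  );

  await queue.addBulk(
    Array.from({ length: 20 }, (_, i) => ({ name: 'job', data: { i } })),
  );

  // Crude wait for the queue to drain; long enough for this sketch.
  await new Promise((resolve) => setTimeout(resolve, 5_000));

  await worker.close();
  await queue.close();
  return peak; // a single worker with concurrency 2 should report 2
}
```

Running one such worker per simulated instance against the same queue and comparing the combined peak with instances × concurrency would demonstrate (or rule out) the doubling described above.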