
When bullmq was deployed to k8s, Concurrency was out of whack!!! #3075

Open
fifa334 opened this issue Feb 13, 2025 · 7 comments

@fifa334

fifa334 commented Feb 13, 2025

Hi,

When I deploy to k8s with two pods and set concurrency to 2, I expect each pod to process two jobs concurrently. In practice, four jobs run concurrently: the effective concurrency appears to be the configured concurrency multiplied by the number of pods.

Could you give me some ideas, please?

@roggervalf
Collaborator

hi @fifa334,
To understand your case: do you have 2 workers in each pod, each with concurrency 2?

There are two ways to set concurrency:

  • Local concurrency (https://docs.bullmq.io/guide/workers/concurrency#local-concurrency-factor): how many jobs a single worker instance is allowed to process in parallel.

  • Global concurrency (https://docs.bullmq.io/guide/queues/global-concurrency): how many jobs are allowed to be processed in parallel across all your worker instances.
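The two settings above can be sketched as follows. This is a minimal illustration (not from the thread): it assumes BullMQ v5, a Redis instance at localhost:6379, and the queue name "queueA" is only an example.

```typescript
import { Queue, Worker } from 'bullmq';

const connection = { host: 'localhost', port: 6379 };

// Local concurrency: THIS worker instance may run up to 2 jobs in parallel.
const worker = new Worker(
  'queueA',
  async (job) => {
    // ...process the job...
  },
  { connection, concurrency: 2 },
);

// Global concurrency: at most 4 jobs in parallel across ALL workers of
// queueA, regardless of how many pods or worker instances are running.
async function setLimit() {
  const queue = new Queue('queueA', { connection });
  await queue.setGlobalConcurrency(4);
  await queue.close();
}
```

Note that every worker instance contributes its own local concurrency, so two pods each running one worker with `concurrency: 2` yield up to 4 parallel jobs unless a lower global limit is set.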

@fifa334
Author

fifa334 commented Feb 13, 2025

hi @roggervalf ,

  1. In each pod I have 2 queues, each with its own worker. For example, queueA and queueB each have a worker; workerA's concurrency is set to 1 and workerB's to 3. However, when I add jobs to queueA in succession, queueA appears to process two jobs at the same time.

  2. I have tried global concurrency, but it didn't live up to my expectations. I set the global concurrency to 1; when I added four jobs to queueA in a row, queueA consumed only one job, and the other jobs were not picked up automatically after job1 finished.

Could you give me some ideas, please?

@roggervalf
Collaborator

hey @fifa334,
For case 1, we might need a test case to reproduce that behavior; as far as I can tell, local concurrency should work. If you see 2 jobs in the active state, could you verify that there is no more than 1 worker for queueA? You can use this method: https://api.docs.bullmq.io/classes/v5.Queue.html#getWorkersCount
For case 2, it's working as intended: the global concurrency value is how many jobs can be executed at the same time across worker instances. If you set it to 1, only 1 job will be processed at any time; after it finishes, the next job will be moved to active. What are your expectations in this case?
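The check suggested for case 1 could look like this. A minimal sketch assuming BullMQ v5 and Redis at localhost:6379; "queueA" is illustrative.

```typescript
import { Queue } from 'bullmq';

async function countWorkers(): Promise<number> {
  const queue = new Queue('queueA', {
    connection: { host: 'localhost', port: 6379 },
  });
  // Returns the number of worker instances currently listening on this queue.
  const count = await queue.getWorkersCount();
  console.log(`workers on queueA: ${count}`);
  await queue.close();
  return count;
}
```

If this returns more than 1 while you expect a single worker, another pod (or another part of the application) has created an extra Worker for the same queue.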

@fifa334
Author

fifa334 commented Feb 13, 2025

Hi @roggervalf ,
Thanks for your reply.
For case 1, I need some time to test.

For case 2, when I set global concurrency to 1 on queueA, I expected the four jobs I added in a row to become active one after the other. Actually, only the first job ran.

I added four jobs in a row. Normally, should the first job be in the active state and the other three in wait?

@manast
Contributor

manast commented Feb 13, 2025

@fifa334 could you please give us a code snippet that reproduces your issue?

@fifa334
Author

fifa334 commented Feb 14, 2025

Hi @manast and @roggervalf ,
I finally know what my problem is.

  • Since I have two pods running in k8s, and a worker is automatically created for each queue when the application starts, my queueA ends up with two workers.

  • Does Worker have a helpful API for this case?

  • Can two pods share the same worker, e.g. via Redis?

Could you give me some ideas, please?

@manast
Contributor

manast commented Feb 14, 2025

@fifa334 I think you have some misconceptions about how workers and concurrency work in BullMQ, so your questions don't really make a lot of sense :)

To summarise: 1 Worker instance (new Worker(...)) can process as many jobs concurrently as the concurrency you set for that worker. You can have as many worker instances as you want, and every worker instance will process as many jobs at the same time as its concurrency setting allows. This is covered here: https://docs.bullmq.io/guide/workers/concurrency

There is a special option though, called "global concurrency", where you can specify the maximum number of jobs processed concurrently by ALL your workers. So if you set this value to 2, for example, then independently of how many workers you have and what concurrency settings they use, only 2 jobs will be processed at the same time. https://docs.bullmq.io/guide/queues/global-concurrency
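The rule described above can be expressed as simple arithmetic. The function below is an illustration (the name is made up, not a BullMQ API): without a global limit, effective parallelism is the sum of each worker's local concurrency; with a global limit, it is capped at that value.

```typescript
// Effective parallelism for a queue, per the summary above.
// workerConcurrencies: the local concurrency of each worker instance.
// globalConcurrency: the optional queue-wide cap.
function effectiveConcurrency(
  workerConcurrencies: number[],
  globalConcurrency?: number,
): number {
  const localSum = workerConcurrencies.reduce((a, b) => a + b, 0);
  return globalConcurrency === undefined
    ? localSum
    : Math.min(localSum, globalConcurrency);
}

// The reporter's setup: 2 pods, each with one worker at concurrency 2.
console.log(effectiveConcurrency([2, 2])); // → 4 jobs in parallel
console.log(effectiveConcurrency([2, 2], 2)); // → capped at 2
```

This matches what the reporter observed: two pods with concurrency 2 each gave 4 concurrent jobs, and a global concurrency of 1 allowed only one active job at a time.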
