Issue 4 wait 5 sec #40
base: main
Changes from all commits: 5391c92, 0af9094, 895b760, cda26a8, e762188, f0f251c, 177fbce, 50427f9
```diff
@@ -14,7 +14,7 @@
     filters,
     CallbackContext,
 )
-from handler import text_handler, photo_handler
+from handler import photo_handler, store_message, generate_response
 from logger import logger
 
 load_dotenv()
@@ -91,8 +91,19 @@ async def error_handler(update: Update, context: CallbackContext) -> None:
     else:
         logger.error(f"Update {update} caused error {context.error}", exc_info=context.error)
 
+delaying = asyncio.Lock()
+
+async def delay_then_response(update, con):
+    async with delaying:
+        await asyncio.sleep(3)  # Wait 3 seconds for new messages
+        await generate_response(update, con)
+
 async def text_handler_proxy(update, context):
-    await text_handler(update, context, con)
+    _ = context
+    await store_message(update, con)
+
+    if not delaying.locked():
+        _ = asyncio.create_task(delay_then_response(update, con))  # run the delay in background
```
Remember I mentioned that this feature would be tricky to get right? You have a dangerous race condition here that will cause some messages to be reordered and/or temporarily ignored. Consider the following sequence of events:

Speaking from experience, the likelihood of race conditions increases quickly as you add new states to the program. Here, the culprit is the semaphore you added. Think of a solution that does not require additional state besides what you already have in the database. Any such solution will be much more robust than what you currently have.

One way to approach it is to always start a new task when a message arrives. The task will then wait, and before generating a response it will check whether the last message in the chat comes from the LLM; if so, it exits without doing anything, as that means another task has already taken care of the answer. When inserting the LLM-generated response into the database, you have to check that no reordering is happening. There are several ways to do it:

There are other solutions as well. You might notice that what I just outlined uses the database as a sort of queue. It may be beneficial to make the queuing aspect explicit in your design: send freshly received messages into a delayed queue specific to a given chat, read by a worker dedicated to that chat. The messages are then used to generate responses, and the results are pushed into the database only when the corresponding LLM completions are ready. Perhaps it is worth visualizing this on a whiteboard.
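The lock-free approach outlined above can be sketched as follows. This is a minimal, self-contained illustration, not the project's code: an in-memory list stands in for the database table, and `generate_reply` is a placeholder for the LLM call. The only state is the message history itself; each incoming message spawns its own task, and duplicate replies are avoided by re-checking the last sender rather than by holding a lock.

```python
import asyncio

# In-memory stand-in for the chat's message table (a real version
# would query the database). Each entry is (sender, text).
messages: list[tuple[str, str]] = []

async def generate_reply(history: list[tuple[str, str]]) -> str:
    # Placeholder for the LLM call; it yields to the event loop,
    # so other tasks can run while a reply is being generated.
    await asyncio.sleep(0.01)
    return f"reply covering {len(history)} message(s)"

async def respond_later(delay: float = 0.05) -> None:
    # Started once per incoming message -- no shared lock, no extra state.
    await asyncio.sleep(delay)   # wait for possible follow-up messages
    if messages[-1][0] == "llm":
        return                   # another task already answered; do nothing
    reply = await generate_reply(list(messages))
    # Re-check before inserting: a reply may have landed while we were
    # generating. A real version would also verify that no new user
    # message arrived, to avoid inserting a stale (reordered) reply.
    if messages[-1][0] != "llm":
        messages.append(("llm", reply))

async def main() -> None:
    tasks = []
    for text in ("hello", "are you there?"):
        messages.append(("user", text))
        tasks.append(asyncio.create_task(respond_later()))
    await asyncio.gather(*tasks)

asyncio.run(main())
llm_count = sum(1 for sender, _ in messages if sender == "llm")
print(llm_count)  # exactly one reply despite two concurrent tasks
```

Both tasks wake after the delay, but only the first one appends a reply; the second sees that the last message already comes from the LLM and exits.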
```diff
 
 application.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, text_handler_proxy))
```
Violation of the interface segregation principle: each component in a program should have access only to the data it needs, and no more. Here, replace `update` with `chat_id`. An architecturally cleaner solution is to split the function into several smaller ones; we can talk about that later on.
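A minimal sketch of the narrower interface the reviewer suggests. The names mirror the diff, but `generate_response` here is a hypothetical stand-in and `con` is passed as `None` for illustration; the point is only that the helper depends on a plain `chat_id`, not on the whole `Update` object.

```python
import asyncio

async def generate_response(chat_id: int, con) -> str:
    # Hypothetical stand-in for the real generate_response in handler.py.
    return f"response for chat {chat_id}"

async def delay_then_response(chat_id: int, con) -> str:
    # Takes only the chat id it actually needs instead of the whole
    # telegram Update, which makes the function trivial to call in tests.
    await asyncio.sleep(0.01)  # wait for follow-up messages
    return await generate_response(chat_id, con)

result = asyncio.run(delay_then_response(42, con=None))
print(result)  # response for chat 42
```

With this signature, a unit test can exercise the delay logic with a bare integer instead of constructing a full `Update` fixture.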