Behaviour of coap_add_data_large_request() compared to coap_add_data_large_response() #1579
Comments
Thanks for raising this and providing the detailed background. In general, there is a lot of overlap between the large request and large response handling.
Note that RFC 7641 (Observe) came out before CoAP FETCH was introduced in RFC 8132, and RFC 8132 does not expand on what to do with FETCH data when de-registering an Observe. The server certainly has to retain the initial Observe request, along with its data, for when an unsolicited Observe response needs to be triggered. Note that when using GET instead of FETCH, still with large data (which should not really happen with a GET), and setting up an Observe, libcoap also includes the data when doing a de-register (and the libcoap server expects the data to be there if it was present when the Observe was registered).

In the case of a client setting up an Observe, the size of the data that needs to be stored away can be bigger than a single PDU; there is not much that can be done about that. However, the whole amount of data does not need to be saved if the request is not setting up an Observe (which is your PUT scenario).
This is a potential fix for what you are seeing.
Many thanks for your prompt and informative response. I will try this and update below. Our use case here involves occasional sending of configuration data, so this path is not exercised often, but it has now been added to our regression tests.
Okay - this fix does indeed work in our use case. Thank you. Will it be going into the main code, or should I maintain it as a local patch?
Thanks for the confirmation. I have a better, more generic alternative fix submitted in PR #1580, which will end up in the main code.
@ChrisCowen Can this be closed now, as the fix is in the develop branch?
Yes - thank you.
My query relates to sending of large data requests, which seem to behave slightly differently from large responses.
We are making use of libcoap's fragmentation and re-assembly for large packets.
Recently, I tried moving from heap allocation to buffer pools.
GET requests work fine, using coap_add_data_large_response() in our server code.
In our client code, a large PUT is failing the memory pool allocation for type COAP_PDU_BUF in coap_block_new_lg_crcv(), specifically because this line:

```c
size_t data_len = lg_xmit ? lg_xmit->length :
                  pdu->data ?
                  pdu->used_size - (pdu->data - pdu->token) : 0;
```

means the allocation is for the whole multi-block packet. As a workaround I can switch back to heap allocation for this buffer type.
So my question is - should this handle the fragmentation without needing to request a COAP_PDU_BUF large enough to hold the entire (unfragmented) frame? I was expecting the code to allocate single-block-sized PDUs and fill them from the larger buffer passed into coap_add_data_large_request(), as it does on the response side.
In the opposite direction, the server code also passes in a buffer containing the whole payload, but by the time the COAP_PDU_BUF allocations take place, a single-block size has already been worked out, so it does work with buffer pools. The behaviour is therefore slightly different for large responses, and my expectation was that large requests would behave the same way.