expand pg_net functionality with more operations and other data types #77
Comments
Lack of
Was thinking the same.
DELETE is now supported, see #63 (comment).
That's awesome 👍 Do you know if it's possible to upgrade without downtime using Supabase on cloud? 🤞 https://discord.com/channels/839993398554656828/1078585162943705099/1078599101769339010
Maybe try with `ALTER EXTENSION pg_net UPDATE TO '0.7';`. If that doesn't work, then you'd need to do a pause/restore.
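Before attempting the upgrade, it may help to check what is installed and what the server has available. These are standard Postgres catalog queries; whether 0.7 actually shows up depends on the instance:

```sql
-- Version of pg_net currently installed in this database
select extversion from pg_extension where extname = 'pg_net';

-- Versions the server has update scripts for
-- (0.7 must appear here for the upgrade to succeed)
select name, version, installed
from pg_available_extension_versions
where name = 'pg_net';
```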
I just get this output:
Works wonderfully, but the README needs to be updated because it doesn't mention that DELETE is now supported 👍🏻
I'm going to add a request for DELETE to support a body. Supabase storage-api has a delete endpoint that uses the body to pass the pathnames to delete from a bucket, and I have a cron task that deletes files.

It probably doesn't matter that the `http` extension is much slower than `pg_net`, but there is no need to wait for the status of the delete operation (it isn't even clear storage-api reports errors in the bulk-delete case). With `http` I can group the files by bucket and issue a single request per bucket, saving many HTTP requests and per-file storage-api overhead, but that takes about 300 ms for 10 files in one bucket. If I instead have 10 buckets with 1 file each, it takes 2 seconds, and `pg_net` with single deletes would be the clear winner. With `pg_net` DELETE the cron task takes 0.03 seconds either way, as I don't care about the status.

So the dilemma is the case where there are 100 files to delete across a few buckets: the `http` bulk delete will take longer but will drastically reduce the number of storage-api calls and HTTP traffic, while `pg_net` will finish the cron task very quickly but generate 100 HTTP requests and storage-api calls. I could use both extensions and decide based on the group-by sizes, but that seems like overkill.
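The fire-and-forget per-file approach described above could be sketched roughly as follows. This assumes pg_net's `net.http_delete`; the storage URL layout, the service-role header, and the `where` criterion are all assumptions about the poster's setup, not anything stated in the thread:

```sql
-- Enqueue one async DELETE per object and ignore the responses.
select net.http_delete(
    url     := 'https://myproject.supabase.co/storage/v1/object/'
               || bucket_id || '/' || name,
    headers := '{"Authorization": "Bearer SERVICE_ROLE_KEY"}'::jsonb
)
from storage.objects
where created_at < now() - interval '30 days';  -- hypothetical cron criterion
```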
Looked pretty easy to add it...
But no...
Feature request
Is your feature request related to a problem? Please describe.

Yes. I would like to use `pg_net` to access REST micro-services in an asynchronous manner, when those services rely on other HTTP methods besides just `GET`, such as `PUT`, `PATCH`, and `DELETE`. Moreover, sometimes those services work with payloads that are not JSON and therefore cannot be passed to a PostgreSQL function as a `json` or `jsonb` data type.

Describe the solution you'd like
In addition to the existing `net.http_get(url text, params jsonb, headers jsonb, timeout_milliseconds int)` and `net.http_post(url text, params jsonb, headers jsonb, timeout_milliseconds int)` functions, I would like there to be a master function, `net.http(request http_request, timeout_milliseconds int)`, similar to the `http.http(request http_request)` function in the pgsql-http extension. As in that extension, `http_request` would be a data type that has both a `method` and a `content` attribute, the latter being `varchar`. This would be enough to support other HTTP methods and other payloads.

Describe alternatives you've considered
I have considered, and even used, the synchronous `http` extension in conjunction with custom tables and the `pg_cron` extension to (re)implement a pseudo-async processor, but it's cumbersome and duplicates work that is already in the `pg_net` extension.

Additional context
No other context is relevant.
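To make the proposal above concrete, here is a rough sketch of what the type and function could look like, loosely modeled on pgsql-http's `http_request` composite type. Nothing here exists in pg_net today; all names, fields, and the return type are illustrative:

```sql
-- Hypothetical request type, simplified from pgsql-http's http_request
create type http_request as (
    method       text,     -- 'GET', 'PUT', 'PATCH', 'DELETE', ...
    uri          text,
    headers      jsonb,
    content_type text,
    content      varchar   -- raw payload, not restricted to JSON
);

-- Hypothetical master function:
--   net.http(request http_request, timeout_milliseconds int) returns bigint
-- Example: enqueue an async PATCH with an XML body
select net.http(
    row('PATCH',
        'https://example.com/api/widgets/42',
        '{"Accept": "application/xml"}'::jsonb,
        'application/xml',
        '<widget><name>renamed</name></widget>')::http_request,
    5000
);
```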