I have created a sink connector with a configuration like this:
{
  "connector.class": "io.aiven.kafka.connect.http.HttpSinkConnector",
  "http.authorization.type": "none",
  "tasks.max": "3",
  "name": "{{connector-name}}",
  "http.url": "{{service-endpoint-url}}",
  "auto.commit.interval.ms": "15000",
  "heartbeat.interval.ms": "15000",
  "value.converter": "org.apache.kafka.connect.storage.StringConverter",
  "retry.backoff.ms": "30000",
  "http.ssl.trust.all.certs": "true",
  "topics.regex": "{{topic-name}}",
  "http.ssl.trust.all.certs": "true",
  "max.poll.interval.ms": "3600000"
}
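(A side note on this config: as far as I understand, auto.commit.interval.ms, heartbeat.interval.ms, and max.poll.interval.ms are Kafka consumer properties, and in Kafka Connect a connector config only passes them through to the consumer when they carry the consumer.override. prefix, assuming the worker's connector.client.config.override.policy allows overrides. A minimal sketch of that form, with example values:

{
  "consumer.override.max.poll.interval.ms": "3600000",
  "consumer.override.max.poll.records": "100"
}

So it is possible the consumer-level values in my config above were silently ignored by the worker.)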
I reviewed the service's response log and observed that response durations ranged between 5 and 10 seconds. I then counted the requests and found there were more requests than messages: for example, with 500 messages in Kafka, the service received more than 500 requests, and some messages were delivered in duplicate.
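If it helps narrow things down, my working guess is that the slow responses trigger retries, either the connector's own HTTP retry loop or a redelivery after a consumer rebalance, so a slow message can be posted more than once. The retry-related options I found for this connector are sketched below (names as I read them in the Aiven connector docs; the values are only placeholders, not a known fix):

{
  "max.retries": "1",
  "retry.backoff.ms": "30000",
  "kafka.retry.backoff.ms": "30000"
}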
Which configuration will fix this issue?
Thank you
NM-Narut changed the title from "Http Sink Connector retry resume messages when the service has a long-time response" to "Http Sink Connector retry messages when the service has a long-time response" on Oct 10, 2024.