
Adding custom proto serdes #100

Open · singh-viikram wants to merge 7 commits into main
Conversation

singh-viikram

Description

Adding custom proto serdes to avoid schema validation for proto-based events. Since both the producer and the consumers read the message using the same protos anyway, we don't need schema validation.
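
For illustration, here is a minimal sketch of what such a schema-registry-free proto SerDe can look like in Java. The class name ProtoSerde matches the test added in this PR (ProtoSerdeTest), but the constructor shape and generics below are assumptions for the sketch, not necessarily the PR's actual code:

```java
import com.google.protobuf.InvalidProtocolBufferException;
import com.google.protobuf.Message;
import com.google.protobuf.Parser;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.Serializer;

/**
 * SerDe for protobuf messages that bypasses the schema registry entirely:
 * producer and consumer share the same compiled proto classes, so bytes go
 * straight through toByteArray()/parseFrom() with no registry lookup.
 */
public class ProtoSerde<T extends Message> extends Serdes.WrapperSerde<T> {

  public ProtoSerde(Parser<T> parser) {
    super(new ProtoSerializer<>(), new ProtoDeserializer<>(parser));
  }

  private static class ProtoSerializer<T extends Message> implements Serializer<T> {
    @Override
    public byte[] serialize(String topic, T data) {
      // Plain protobuf wire format: no Confluent magic byte or schema-id prefix.
      return data == null ? null : data.toByteArray();
    }
  }

  private static class ProtoDeserializer<T extends Message> implements Deserializer<T> {
    private final Parser<T> parser;

    ProtoDeserializer(Parser<T> parser) {
      this.parser = parser;
    }

    @Override
    public T deserialize(String topic, byte[] bytes) {
      if (bytes == null) {
        return null;
      }
      try {
        return parser.parseFrom(bytes);
      } catch (InvalidProtocolBufferException e) {
        throw new RuntimeException("Failed to deserialize proto message from topic " + topic, e);
      }
    }
  }
}
```

Because the payload is the plain protobuf wire format, nothing in this path ever talks to the schema registry, which is exactly the validation this PR intends to skip.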

Testing

Please describe the tests that you ran to verify your changes. Summarize what you tested and what still needs to be tested, e.g. deployed and tested the Helm chart locally.

Checklist:

  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • Any dependent changes have been merged and published in downstream modules

Documentation

Make sure that you have documented the corresponding changes in this repository or in the hypertrace docs repo, if required.

@singh-viikram singh-viikram requested a review from a team as a code owner September 12, 2024 10:16

github-actions bot commented Sep 12, 2024

Test Results

16 files (+1) · 16 suites (+1) · 29s ⏱️ (−1s)
68 tests (+1): 68 ✅ (+1), 0 💤 (±0), 0 ❌ (±0)
86 runs (+1): 86 ✅ (+1), 0 💤 (±0), 0 ❌ (±0)

Results for commit 2a40d95. ± Comparison against base commit 8f5da9e.

This pull request removes 5 tests and adds 6. Note that renamed tests count towards both.

Removed:
org.hypertrace.core.kafkastreams.framework.rocksdb.BoundedMemoryConfigSetterTest ‑ [1] {rocksdb.compaction.style=LEVEL, rocksdb.max.write.buffers=2, rocksdb.direct.reads.enabled=true, rocksdb.write.buffer.size=8388608, rocksdb.block.size=8388608, rocksdb.compression.type=SNAPPY_COMPRESSION, rocksdb.log.level=INFO_LEVEL, application.id=app-1}
org.hypertrace.core.kafkastreams.framework.rocksdb.BoundedMemoryConfigSetterTest ‑ [2] {rocksdb.compaction.style=UNIVERSAL, rocksdb.max.write.buffers=3, rocksdb.direct.reads.enabled=true, rocksdb.write.buffer.size=8388607, rocksdb.block.size=8388609, rocksdb.compression.type=SNAPPY_COMPRESSION, rocksdb.log.level=DEBUG_LEVEL, application.id=app-2}
org.hypertrace.core.kafkastreams.framework.rocksdb.BoundedMemoryConfigSetterTest ‑ [3] {application.id=app-3, rocksdb.cache.high.priority.pool.ratio=-0.1}
org.hypertrace.core.kafkastreams.framework.rocksdb.BoundedMemoryConfigSetterTest ‑ [3] {rocksdb.compaction.style=FIFO, rocksdb.max.write.buffers=4, rocksdb.direct.reads.enabled=false, rocksdb.write.buffer.size=8388609, rocksdb.block.size=8388607, rocksdb.compression.type=SNAPPY_COMPRESSION, rocksdb.log.level=ERROR_LEVEL, application.id=app-3}
org.hypertrace.core.kafkastreams.framework.rocksdb.BoundedMemoryConfigSetterTest ‑ [4] {application.id=app-2, rocksdb.cache.high.priority.pool.ratio=1.1}

Added:
org.hypertrace.core.kafkastreams.framework.rocksdb.BoundedMemoryConfigSetterTest ‑ [1] {rocksdb.log.level=INFO_LEVEL, application.id=app-1, rocksdb.compaction.style=LEVEL, rocksdb.max.write.buffers=2, rocksdb.direct.reads.enabled=true, rocksdb.write.buffer.size=8388608, rocksdb.block.size=8388608, rocksdb.compression.type=SNAPPY_COMPRESSION}
org.hypertrace.core.kafkastreams.framework.rocksdb.BoundedMemoryConfigSetterTest ‑ [2] {rocksdb.log.level=DEBUG_LEVEL, application.id=app-2, rocksdb.compaction.style=UNIVERSAL, rocksdb.max.write.buffers=3, rocksdb.direct.reads.enabled=true, rocksdb.write.buffer.size=8388607, rocksdb.block.size=8388609, rocksdb.compression.type=SNAPPY_COMPRESSION}
org.hypertrace.core.kafkastreams.framework.rocksdb.BoundedMemoryConfigSetterTest ‑ [3] {rocksdb.cache.high.priority.pool.ratio=-0.1, application.id=app-3}
org.hypertrace.core.kafkastreams.framework.rocksdb.BoundedMemoryConfigSetterTest ‑ [3] {rocksdb.log.level=ERROR_LEVEL, application.id=app-3, rocksdb.compaction.style=FIFO, rocksdb.max.write.buffers=4, rocksdb.direct.reads.enabled=false, rocksdb.write.buffer.size=8388609, rocksdb.block.size=8388607, rocksdb.compression.type=SNAPPY_COMPRESSION}
org.hypertrace.core.kafkastreams.framework.rocksdb.BoundedMemoryConfigSetterTest ‑ [4] {rocksdb.cache.high.priority.pool.ratio=1.1, application.id=app-2}
org.hypertrace.core.kafkastreams.framework.serdes.ProtoSerdeTest ‑ testSerialize()

♻️ This comment has been updated with latest results.

Contributor

@laxmanchekka left a comment


  • Please add a unit test similar to the Avro SerDe test; it helps to understand how this is used (see the sketch after this list).
  • Fix the builds. Some PR builds are failing.
  • Address the existing review comments.
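
To illustrate the first point, a round-trip unit test in the spirit of the Avro SerDe test could look like the sketch below. It assumes the hypothetical ProtoSerde constructor from the earlier sketch and uses the protobuf well-known type Timestamp so it needs no generated test protos:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import com.google.protobuf.Timestamp;
import org.junit.jupiter.api.Test;

class ProtoSerdeTest {

  @Test
  void testSerializeDeserializeRoundTrip() {
    // Any generated proto message would do; Timestamp ships with protobuf-java.
    ProtoSerde<Timestamp> serde = new ProtoSerde<>(Timestamp.parser());
    Timestamp original =
        Timestamp.newBuilder().setSeconds(1726135000L).setNanos(42).build();

    byte[] bytes = serde.serializer().serialize("test-topic", original);
    Timestamp decoded = serde.deserializer().deserialize("test-topic", bytes);

    // The deserialized message must equal the original, field for field.
    assertEquals(original, decoded);
  }
}
```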

suresh-prakash previously approved these changes Sep 13, 2024

@suresh-prakash left a comment


Looks good to me. Please get it reviewed by @laxmanchekka as well.


@kishansairam9 left a comment


Please get a sign-off from @avinashkolluru as well. We have quite a few places where this is integrated with the schema registry, and we are doing the same for new code.

If we are able to set the default compatibility level for proto at the schema registry to backward_full, then we wouldn't need to add another hook per proto to update compatibility. Last I heard, Srikar checked and couldn't find such a flag; it may be worth starting a discussion with the community.

Though the schema registry is not strictly required if we handle proto evolution correctly, if some repo misses enabling that check in CI, the code will go into dev and produce incompatible messages, which then have to be recovered manually by moving offsets. With schema-registry-based proto, the producer would not emit such messages at all; it would fail with a runtime check instead. I'd prefer to have proto go via the schema registry if we can set the default compatibility level for proto differently from Avro (a sketch of the per-subject workaround follows below).
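
For context on the per-proto hook mentioned above: since the registry's default compatibility level applies to every subject regardless of serialization format, each proto subject currently needs an explicit override. A sketch of that call with Confluent's Java client follows; the registry URL, subject name, and compatibility level are illustrative, not values from this PR:

```java
import io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient;
import io.confluent.kafka.schemaregistry.client.SchemaRegistryClient;

public class ProtoCompatibilityHook {
  public static void main(String[] args) throws Exception {
    // Illustrative registry URL; 100 is the client's schema cache capacity.
    SchemaRegistryClient client =
        new CachedSchemaRegistryClient("http://schema-registry:8081", 100);

    // Per-subject override: this is the hook that must be repeated for each
    // proto subject because there is no format-specific default compatibility.
    client.updateCompatibility("my-proto-topic-value", "FULL_TRANSITIVE");
  }
}
```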
