feat: metrics on meterer usage #1212
Conversation
@@ -137,6 +136,7 @@ func (s *DispersalServerV2) validateDispersalRequest(
 	}
 	blob := req.GetBlob()
 	blobSize := len(blob)
+	s.metrics.reportDisperseBlobSize(blobSize)
should we export this metric only for valid requests?
I'll move this so it has the same reasoning as before. I was thinking we should record the size even if the request ends up being invalid, or maybe add a metric that counts the different invalid categories, but that's probably a separate issue.
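As a side note, a per-category rejection metric like the one floated above could look roughly like the sketch below; the metric name, labels, and usage are hypothetical and not part of this PR:

```go
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

func main() {
	registry := prometheus.NewRegistry()

	// Hypothetical metric: count rejected dispersal requests by reason,
	// so invalid requests stay visible even when blob size is only
	// recorded for requests that pass validation.
	invalidRequests := promauto.With(registry).NewCounterVec(
		prometheus.CounterOpts{
			Name: "invalid_dispersal_requests_total",
			Help: "Dispersal requests rejected during validation, by reason.",
		},
		[]string{"reason"},
	)

	// Each validation failure increments the counter for its category.
	invalidRequests.WithLabelValues("blob_too_large").Inc()
	invalidRequests.WithLabelValues("invalid_payment_signature").Inc()

	fmt.Println("registered and incremented per-reason rejection counter")
}
```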
disperser/apiserver/metrics_v2.go (Outdated)
@@ -88,6 +89,15 @@ func newAPIServerV2Metrics(registry *prometheus.Registry, metricsConfig disperse
 		[]string{},
 	)
+
+	disperseBlobMeteredBytes := promauto.With(registry).NewGaugeVec(
can we make this a counter so that we can track the increments?
dispersalBlobSize is a gauge and not a counter, so I think this should be the same type so that the two throughputs can be compared. We can sum the gauge over a range to look at the increments, right?
dispersalBlobSize should probably also be a counter. The problem with making it a gauge is that each request replaces the previous value; instead, we want subsequent requests to accumulate in one counter.
okay, makes sense, I've updated both to counters!
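To illustrate the gauge-vs-counter distinction discussed here, a minimal sketch of the two approaches (metric names are illustrative, not the PR's actual names):

```go
package main

import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

func main() {
	registry := prometheus.NewRegistry()
	blobSize := 131072 // example blob size in bytes

	// Gauge: Set overwrites the previous observation, so at scrape time
	// only the most recent request's size is visible.
	blobSizeGauge := promauto.With(registry).NewGaugeVec(
		prometheus.GaugeOpts{Name: "disperse_blob_size_bytes"},
		[]string{},
	)
	blobSizeGauge.WithLabelValues().Set(float64(blobSize))

	// Counter: Add accumulates across requests, so rate()/increase()
	// over a time range recovers throughput.
	blobBytesCounter := promauto.With(registry).NewCounterVec(
		prometheus.CounterOpts{Name: "disperse_blob_bytes_total"},
		[]string{},
	)
	blobBytesCounter.WithLabelValues().Add(float64(blobSize))
}
```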
@@ -122,12 +120,13 @@ func (s *DispersalServerV2) checkPaymentMeter(ctx context.Context, req *pb.Dispe
 	if err != nil {
Agree that it would be a better idea to return the symbols charged as a return value from MeterRequest.
Did some minor related refactoring :)
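For context, a minimal sketch of the shape of that refactor, assuming MeterRequest is changed to return the symbols it charged; the signature, placeholder logic, and helper names here are illustrative, not the PR's actual code:

```go
package main

import "fmt"

// meterer stands in for the payment meterer; in this sketch MeterRequest
// returns the number of symbols it charged so the caller can record it,
// instead of the caller re-deriving that value.
type meterer struct{}

func (m *meterer) MeterRequest(blobLengthSymbols uint64) (uint64, error) {
	// Placeholder charging logic; the real meterer applies rate limits,
	// minimum charges, and payment rules.
	symbolsCharged := blobLengthSymbols
	return symbolsCharged, nil
}

func main() {
	m := &meterer{}
	symbolsCharged, err := m.MeterRequest(2048)
	if err != nil {
		fmt.Println("metering failed:", err)
		return
	}
	// In checkPaymentMeter this value would feed the new metric, e.g.
	// s.metrics.reportDisperseBlobSymbolsCharged(symbolsCharged).
	fmt.Println("symbols charged:", symbolsCharged)
}
```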
Why are these changes needed?
Added a disperseBlobSymbolsCharged gauge vector metric, to be compared against the existing disperseBlobSize (we expect disperseBlobSymbolsCharged to round up and be a multiple of 128kb for each dispersal).
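As a rough illustration of that expectation, a sketch of the relationship between the two metrics, assuming a 128 KiB minimum charge unit as stated above (the helper name is hypothetical):

```go
package main

import "fmt"

// 128 KiB minimum charge unit, per the description above (assumption).
const minChargeBytes = 128 * 1024

// expectedChargedBytes rounds a raw blob size up to the next multiple of
// the minimum charge unit; this is roughly what disperseBlobSymbolsCharged
// (converted to bytes) should show relative to disperseBlobSize.
func expectedChargedBytes(blobSizeBytes int) int {
	units := (blobSizeBytes + minChargeBytes - 1) / minChargeBytes
	return units * minChargeBytes
}

func main() {
	for _, size := range []int{1, 100_000, 131_072, 200_000} {
		fmt.Printf("blob %7d bytes -> charged %7d bytes\n", size, expectedChargedBytes(size))
	}
}
```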