proof of concept for a rate limiter #155
base: main
Conversation
also wondering about the package names, most of the stuff is in the
Thanks for the contribution and great start! I left a few comments but like the overall idea. I'd love to see tests for any new features too, as I'm trying to get the code coverage up.
It would also be very cool if we could detect standard/common rate limit headers and enhance our calm (i.e. slow down making requests).
Playing devil's advocate here, would it be better to generate HTTP operations/payloads to some file/script and let another specialized load testing tool take that as input? What additional advantages are there to doing all of the work here?
AddGlobalFlag("rsh-requests", "R", "Do a number of requests", 1, false)
AddGlobalFlag("rsh-load-rate", "l", "Set a load rate in requests per second. Overrides the HTTP cache to false", 1, false)
Should these default to zero? I have some concerns:
- If non-zero by default they disable caching
- Limiting to 1 request a second is problematic for automatic pagination (which may do many sequential requests) and also see Bulk Resource Management #156 which does many requests.
- Total number of requests vs. some time period (e.g. 30 seconds) - which is better for this?
cli/cli.go
if rps, _ := GlobalFlags.GetInt("rsh-load-rate"); rps > 0 {
	viper.Set("rsh-load-rate", rps)
	// doesn't make sense to flood with requests and obtain a cached response
	viper.Set("rsh-no-cache", false)
You probably intended to set this to true, since it's setting "no cache".
ticker := time.Tick(time.Second)
var wg = &sync.WaitGroup{}
wg.Add(l.requests)
for ; true; <-ticker {
	if executed >= l.requests {
		break
	}
	for i := 0; i < l.rate; i++ {
		go func() {
			f()
			executed++
			wg.Done()
		}()
	}
}
wg.Wait()
Some questions/concerns:
- What happens if a request takes more than a second? Seems like we'd get many in flight at the same time.
- If a request finishes really fast there is no opportunity to create new ones until the full second has passed, even if a sliding time window would allow a new request.
- This assumes all the requests are the same, but add multiple requests option flag #134 suggests they might get generated for each API operation. How will that work?
7b95364 to 0a99a29
Proof of concept for #134
Only for generic commands for now; let me know what you think. Can be used with:
-R 10 -l 2