Changing livenessProbe configuration #14
Merged · +6 −1
Description
I also took the opportunity to rename env.example to .env.example.
Bug fixing
AWS runs were consistently failing when running in daemon mode on K8s.
Why
The root cause is still unknown, but what's happening is quite clear. The default values for the livenessProbe are a 10-second period and a 1-second timeout. Notice the screenshot below.

During the AWS run, because the calls to their API take long, the resource occupancy apparently makes the response to the probe take longer than 1 second. In the screenshot, a probe was sent at 17:13:44, so the next probe log should have appeared by 17:13:55 at the latest, but it only shows up at 17:14:00. Most likely, under the load of these multiple long calls, the probe takes more than a second to get a reply, times out, and abruptly ends the run.
Moreover, following kube-score's advice regarding the livenessProbe, I also took the liberty of moving it to a separate /live endpoint.

Next steps
Depending on how long these calls take, there may be a need to make the probe timeout configurable via an environment variable. Any thoughts on that?
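For illustration, a dedicated liveness endpoint like the /live one mentioned above can be a trivial handler that answers immediately and checks no dependencies. This is a minimal sketch assuming a Python service; the project's actual language and framework may differ.

```python
# Hypothetical sketch of a dedicated /live endpoint using only the
# standard library; the real service's framework is an assumption.
import http.server
import threading
import urllib.request

class ProbeHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/live":
            # Liveness should answer immediately and check nothing else,
            # so long-running work elsewhere cannot starve the probe.
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"OK")
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        # Silence per-request logging to keep the example quiet.
        pass

# Bind to an ephemeral port and serve in the background.
server = http.server.HTTPServer(("127.0.0.1", 0), ProbeHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Self-check: query the endpoint the way kubelet's httpGet probe would.
port = server.server_address[1]
status = urllib.request.urlopen(f"http://127.0.0.1:{port}/live").read()
print(status.decode())  # OK
server.shutdown()
```

Keeping /live separate from a readiness endpoint also matches the kube-score recommendation: readiness can check dependencies, while liveness only confirms the process is responsive.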