K8SPSMDB-1205: Allow backups in unmanaged clusters #1715
Merged
Conversation
egegunes force-pushed the K8SPSMDB-1205 branch from 099c6d0 to 9941eee on November 12, 2024 at 11:39.
egegunes requested review from tplavcic, nmarukovich, ptankov, jvpasinatto, eleo007, hors, inelpandzic and pooknull as code owners on November 12, 2024 at 15:18.
inelpandzic approved these changes on Nov 12, 2024 (commit: 6bc38f5).
hors approved these changes on Nov 13, 2024.
Comment on lines 17 to 35:
```diff
 	local endpoint="$1"
 	local rsName="$2"
-	local nodes_amount=0
-	until [[ ${nodes_amount} == 6 ]]; do
-		nodes_amount=$(run_mongos 'rs.conf().members.length' "clusterAdmin:clusterAdmin123456@$endpoint" "mongodb" ":27017" \
+	local target_count=$3
+
+	local nodes_count=0
+	until [[ ${nodes_count} == ${target_count} ]]; do
+		nodes_count=$(run_mongos 'rs.conf().members.length' "clusterAdmin:clusterAdmin123456@$endpoint" "mongodb" ":27017" \
 			| egrep -v 'I NETWORK|W NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:|bye' \
 			| $sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/')
-		echo "waiting for all members to be configured in ${rsName}"
+		echo -n "waiting for all members to be configured in ${rsName}"
 		let retry+=1
 		if [ $retry -ge 15 ]; then
-			echo "Max retry count $retry reached. something went wrong with mongo cluster. Config for endpoint $endpoint has $nodes_amount but expected 6."
+			echo "Max retry count ${retry} reached. something went wrong with mongo cluster. Config for endpoint ${endpoint} has ${nodes_count} but expected ${target_count}."
 			exit 1
 		fi
-		echo -n .
+		echo .
 		sleep 10
 	done
```
[shfmt] reported by reviewdog 🐶
Suggested change:

```shell
	local endpoint="$1"
	local rsName="$2"
	local target_count=$3
	local nodes_count=0
	until [[ ${nodes_count} == ${target_count} ]]; do
		nodes_count=$(run_mongos 'rs.conf().members.length' "clusterAdmin:clusterAdmin123456@$endpoint" "mongodb" ":27017" \
			| egrep -v 'I NETWORK|W NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:|bye' \
			| $sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/')
		echo -n "waiting for all members to be configured in ${rsName}"
		let retry+=1
		if [ $retry -ge 15 ]; then
			echo "Max retry count ${retry} reached. something went wrong with mongo cluster. Config for endpoint ${endpoint} has ${nodes_count} but expected ${target_count}."
			exit 1
		fi
		echo .
		sleep 10
	done
```
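The helper retries up to 15 times, re-reading `rs.conf().members.length` until it matches the expected member count. A self-contained sketch of that retry pattern, with the mongo query stubbed out so it runs without a cluster (the function and variable names here are illustrative, not taken from the repo):

```shell
#!/usr/bin/env bash
set -u

# Stand-in for: run_mongos 'rs.conf().members.length' ...
# Returns a fake member count so the sketch is runnable anywhere.
get_members_count() {
	echo "${FAKE_COUNT:-3}"
}

# Hypothetical name; like the suggested change, it takes an endpoint,
# a replset name, and the expected member count.
wait_for_members() {
	local endpoint="$1"
	local rsName="$2"
	local target_count="$3"

	local nodes_count=0
	local retry=0
	until [[ ${nodes_count} == "${target_count}" ]]; do
		nodes_count=$(get_members_count "$endpoint")
		echo -n "waiting for all members to be configured in ${rsName}"
		let retry+=1
		if [ "$retry" -ge 15 ]; then
			echo "Max retry count ${retry} reached for ${endpoint}: have ${nodes_count}, expected ${target_count}."
			return 1
		fi
		echo .
		sleep 0 # the real helper sleeps 10s between checks
	done
}

FAKE_COUNT=3 wait_for_members "cfg.endpoint" "cfg" 3 && echo "all members configured"
```

Passing the expected count as `$3` is what makes the helper reusable across replsets of different sizes, instead of hard-coding `6` as the original did.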
CHANGE DESCRIPTION

Problem:
We don't allow users to run backups in unmanaged clusters.

Cause:
We deliberately added this limitation to avoid confusing users: if you run a backup in an unmanaged (and secondary) cluster, the backup object is created in the unmanaged cluster, but you won't be able to restore it in that same cluster.

Solution:
We're removing the limitation. Technically, we don't need to do much, thanks to the distributed architecture of PBM. Users can now run backups in either managed or unmanaged clusters; the pbm-agent instances select among themselves the node to take the backup from.
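With the limitation removed, a backup in an unmanaged cluster is requested with the same PerconaServerMongoDBBackup resource as in a managed one. A minimal sketch (the metadata, cluster, and storage names below are illustrative):

```yaml
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBBackup
metadata:
  name: backup1
spec:
  clusterName: my-cluster-name # a cluster that may have spec.unmanaged: true
  storageName: s3-us-west      # a storage defined in that cluster's backup section
```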
Warning

There are some caveats which might be confusing: for example, if you set oplogSpanMin to 2 in a secondary cluster, the setting will be applied to the primary cluster too.

CHECKLIST
Jira
- Does the Jira ticket have the proper statuses for documentation (Needs Doc) and QA (Needs QA)?

Tests
- Are the OpenShift compare files changed if needed (compare/*-oc.yml)?

Config/Logging/Testability