
Avoid admin roles in local cluster runner #2026

Merged · 3 commits · Oct 7, 2024
```diff
@@ -25,7 +25,7 @@ async fn main() {
     let nodes = Node::new_test_nodes_with_metadata(
         base_config,
         BinarySource::CargoTest,
-        enum_set!(Role::Admin | Role::Worker),
+        enum_set!(Role::Worker),
         2,
     );
```

Review thread on the `enum_set!(Role::Worker)` line:
**Contributor:** Do you think we should have something in the cluster builder to specify which node is admin?

**Contributor Author:** Currently my thinking is that it's easier to have one singleton node that holds all the singleton roles (metadata, admin), so that the other N nodes can all be more similar. It's always possible to specify nodes in whatever setup you like, but the goal of `new_test_nodes_with_metadata` is to create a list of nodes with some sensible defaults.
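A minimal sketch of the layout this produces, assuming the constructor signature shown in the diffs in this PR (the `Configuration::default()` call and the assertion are illustrative scaffolding, not from the PR):

```rust
// Illustrative only: the singleton "metadata-node" carries the singleton
// roles (Admin, MetadataStore), while the N remaining nodes ("node-1", ...)
// share the roles passed in by the caller.
let base_config = Configuration::default();
let nodes = Node::new_test_nodes_with_metadata(
    base_config,             // cloned per node; bootstrap flags are overridden
    BinarySource::CargoTest, // run the binary built by `cargo test`
    enum_set!(Role::Worker), // roles for the N non-singleton nodes
    2,                       // N: creates "node-1" and "node-2"
);
assert_eq!(nodes.len(), 3); // metadata-node + node-1 + node-2
```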

**Contributor Author:** Hmm, but I can see now that non-bootstrap nodes will fail to start if there is no nodes config yet, which is a slightly unpleasant race condition. I wonder if the cluster construct does indeed need to know which node is the admin and make sure it's started and healthy before moving on.

**Contributor:** Would a slightly longer timeout help in preventing this situation? I could see someone deploying Restate running into the same problem, where some nodes start a bit earlier than others and create a race condition.

**Contributor Author:** Only if your environment can restart the process. Currently the non-bootstrap nodes will shut down if they reach the metadata service and find no nodes config. This is fine with systemd or Kubernetes, which restart on failure, but my local cluster runner does not do this.

**Contributor:** We could avoid failing immediately and instead wait a bit, in order to mitigate the race condition at start-up. Alternatively, all nodes could be started with the bootstrap option, assuming they have identical configuration.
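A minimal sketch of what such a grace period could look like on the non-bootstrap side. The `fetch_nodes_config` helper and its `Ok(Some(..))`/`Ok(None)` contract are assumptions for illustration, not part of this PR:

```rust
use std::time::Duration;

// Hypothetical retry loop: instead of shutting down on the first
// "no nodes config yet" response, keep polling the metadata service
// for a bounded number of attempts. `fetch_nodes_config` is assumed
// to return Ok(None) while the admin node has not bootstrapped yet.
async fn wait_for_nodes_config(attempts: u32) -> Option<NodesConfiguration> {
    for attempt in 0..attempts {
        if let Ok(Some(config)) = fetch_nodes_config().await {
            return Some(config);
        }
        // Linear backoff: give the bootstrapping admin node a bit more time.
        tokio::time::sleep(Duration::from_millis(500 * (attempt as u64 + 1))).await;
    }
    None
}
```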

**Contributor Author:** My current thinking for the runner is to wait for the admins to be ready on port 9070 before progressing to the other nodes. But I agree, it would be good if non-admin nodes would wait a bit instead of bailing.
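A minimal sketch of such a readiness wait, assuming the admin node exposes an HTTP health endpoint on its admin port (the `/health` path and the polling interval are assumptions; the actual `wait_admin_healthy` implementation is not shown in this diff):

```rust
use std::time::{Duration, Instant};

// Hypothetical readiness poll: keep hitting the admin node's health
// endpoint until it answers with a success status or the deadline passes.
async fn wait_admin_healthy(admin_address: &str, timeout: Duration) -> bool {
    let deadline = Instant::now() + timeout;
    let url = format!("http://{admin_address}/health");
    while Instant::now() < deadline {
        if let Ok(resp) = reqwest::get(&url).await {
            if resp.status().is_success() {
                return true;
            }
        }
        tokio::time::sleep(Duration::from_millis(250)).await;
    }
    false
}
```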

**crates/local-cluster-runner/src/cluster/mod.rs** (13 additions, 5 deletions)

```diff
@@ -52,6 +52,8 @@ fn default_cluster_name() -> String {
 pub enum ClusterStartError {
     #[error("Failed to start node {0}: {1}")]
     NodeStartError(usize, NodeStartError),
+    #[error("Admin node is not healthy after waiting 60 seconds")]
+    AdminUnhealthy,
     #[error("Failed to create cluster base directory: {0}")]
     CreateDirectory(io::Error),
     #[error("Failed to create metadata client: {0}")]
@@ -86,11 +88,17 @@ impl Cluster {
         );

         for (i, node) in nodes.into_iter().enumerate() {
-            started_nodes.push(
-                node.start_clustered(base_dir.as_path(), &cluster_name)
-                    .await
-                    .map_err(|err| ClusterStartError::NodeStartError(i, err))?,
-            )
+            let node = node
+                .start_clustered(base_dir.as_path(), &cluster_name)
+                .await
+                .map_err(|err| ClusterStartError::NodeStartError(i, err))?;
+            if node.admin_address().is_some() {
+                // admin nodes are needed for later nodes to bootstrap. we should wait until they are serving
+                if !node.wait_admin_healthy(Duration::from_secs(30)).await {
+                    return Err(ClusterStartError::AdminUnhealthy);
+                }
+            }
+            started_nodes.push(node)
         }

         Ok(StartedCluster {
```
**crates/local-cluster-runner/src/node/mod.rs** (7 additions, 3 deletions)

```diff
@@ -137,7 +137,7 @@ impl Node {
 }

 // Creates a group of Nodes with a single metadata node "metadata-node", and a given number
-// of other nodes ["node-1", ..] each with the provided roles. Node name, roles,
+// of other nodes ["node-1", ..] each with the provided roles. Node name, roles,
 // bind/advertise addresses, and the metadata address from the base_config will all be overwritten.
 pub fn new_test_nodes_with_metadata(
     base_config: Configuration,
@@ -148,18 +148,22 @@
         let mut nodes = Vec::with_capacity((size + 1) as usize);

         {
+            let mut base_config = base_config.clone();
+            base_config.common.allow_bootstrap = true;
             nodes.push(Self::new_test_node(
                 "metadata-node",
-                base_config.clone(),
+                base_config,
                 binary_source.clone(),
                 enum_set!(Role::Admin | Role::MetadataStore),
             ));
         }

         for node in 1..=size {
+            let mut base_config = base_config.clone();
+            base_config.common.allow_bootstrap = false;
             nodes.push(Self::new_test_node(
                 format!("node-{node}"),
-                base_config.clone(),
+                base_config,
                 binary_source.clone(),
                 roles,
             ));
```
**server/tests/cluster.rs** (2 additions, 2 deletions)

```diff
@@ -16,7 +16,7 @@ async fn node_id_mismatch() {
     let nodes = Node::new_test_nodes_with_metadata(
         base_config.clone(),
         BinarySource::CargoTest,
-        enum_set!(Role::Admin | Role::Worker),
+        enum_set!(Role::Worker),
         1,
     );
@@ -64,7 +64,7 @@ async fn cluster_name_mismatch() {
     let nodes = Node::new_test_nodes_with_metadata(
         base_config.clone(),
         BinarySource::CargoTest,
-        enum_set!(Role::Admin | Role::Worker),
+        enum_set!(Role::Worker),
         1,
     );
```