
Log Ceph health and do not fail on HEALTH_WARN #2611

Conversation

fultonj
Contributor

@fultonj fultonj commented Dec 17, 2024

The Ceph Upgrade tasks in the cifmw_cephadm role
will fail before the upgrade starts if the health
status is warn or error.

This patch changes it so that the upgrade only fails
if the cluster is in health error.

We have had the job fail in CI but we do not know why.
The task should log the Ceph health before starting the
upgrade so that CI results will give the job owner more
insight into why the job failed.
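
To make the intended behavior concrete, below is a rough sketch of the kind of task change this describes. The task names, the `cifmw_cephadm_ceph_cli` variable, and the exact commands are illustrative assumptions, not the actual cifmw_cephadm code:

```yaml
# Illustrative sketch only -- not the real cifmw_cephadm tasks.
# Assumes a cifmw_cephadm_ceph_cli-style variable that wraps the "ceph" CLI
# (for example via "cephadm shell -- ceph").
- name: Get Ceph health before starting the upgrade
  ansible.builtin.command: "{{ cifmw_cephadm_ceph_cli | default('ceph') }} health detail"
  register: ceph_health_pre_upgrade
  changed_when: false

- name: Log Ceph health so CI results show the pre-upgrade state
  ansible.builtin.debug:
    msg: "{{ ceph_health_pre_upgrade.stdout }}"

- name: Fail only if the cluster is in HEALTH_ERR (HEALTH_WARN no longer blocks the upgrade)
  ansible.builtin.fail:
    msg: "Ceph is in HEALTH_ERR; refusing to start the upgrade"
  when: "'HEALTH_ERR' in ceph_health_pre_upgrade.stdout"
```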

@fultonj fultonj requested a review from a team as a code owner December 17, 2024 02:57
@github-actions github-actions bot marked this pull request as draft December 17, 2024 02:58

Thanks for the PR! ❤️
I'm marking it as a draft; once you're happy with merging it and the PR is passing CI, click the "Ready for review" button below.

@fultonj
Contributor Author

fultonj commented Dec 17, 2024

@fmount @katarimanojk

The component-compute-edpm-update-rhel9-rhoso18.0-crc-ceph job failed 3 times in a row on HEALTH_WARN but we don't know why. My initial plan was just to get the debug information and I think we're safe to merge that part of this patch.

For the second part of this patch I removed the failure when the health is only HEALTH_WARN. What do you think of that? If we know the cluster can survive an upgrade while in WARN then I think it makes sense to keep that change, but if we know it can't then I'll take it out. I thought we should at least get a run of the job with both changes to start debugging the problem though.

@fultonj fultonj changed the title Log Ceph health before starting Ceph upgrade Log Ceph health and do not fail on HEALTH_WARN Dec 17, 2024
@fmount
Contributor

fmount commented Dec 17, 2024

> @fmount @katarimanojk
>
> The component-compute-edpm-update-rhel9-rhoso18.0-crc-ceph job failed 3 times in a row on HEALTH_WARN but we don't know why. My initial plan was just to get the debug information and I think we're safe to merge that part of this patch.
>
> For the second part of this patch I removed the failure when the health is only HEALTH_WARN. What do you think of that? If we know the cluster can survive an upgrade while in WARN then I think it makes sense to keep that change, but if we know it can't then I'll take it out. I thought we should at least get a run of the job with both changes to start debugging the problem though.

What is weird is that I didn't expect that job to perform a Ceph minor update. It was added in unigamma only, where we know everything about the status of Ceph in terms of sizing, while we don't have control over the component-compute-edpm-update job.
I'm wondering if the right choice would be to not perform such an update in that job, but I'm also OK with keeping it and proceeding even if the status is HEALTH_WARN. However, as per the cephadm code, there is a chance of another failure when the new OSD containers are rolled out, because of the HEALTH_WARN and the potential data movement (scrub, recovery, etc.) happening at that time.

Contributor

@fmount fmount left a comment


/lgtm

The Ceph Upgrade tasks in the cifmw_cephadm role
will fail before the upgrade starts if the health
status is warn or error.

This patch changes it so that the upgrade only fails
if the cluster is in health error.

We have had the job fail in CI but we do not know why.
The task should log the Ceph health before starting the
upgrade so that CI results will give the job owner more
insight into why the job failed.

Signed-off-by: John Fulton <fulton@redhat.com>
@fultonj fultonj force-pushed the ceph_upgrade_debug_info branch from 06127dd to 246fa9c December 17, 2024 13:34
@openshift-ci openshift-ci bot removed the lgtm label Dec 17, 2024
@fultonj
Contributor Author

fultonj commented Dec 17, 2024

> What is weird is that I didn't expect that job to perform a Ceph minor update. It was added in unigamma only, where we know everything about the status of Ceph in terms of sizing, while we don't have control over the component-compute-edpm-update job. I'm wondering if the right choice would be to not perform such an update in that job, but I'm also OK with keeping it and proceeding even if the status is HEALTH_WARN. However, as per the cephadm code, there is a chance of another failure when the new OSD containers are rolled out, because of the HEALTH_WARN and the potential data movement (scrub, recovery, etc.) happening at that time.

Per our discussion we now know that `cifmw_ceph_update: false` was not set. Maybe we should default it to false and only override it in Gamma, since the job was getting run when we didn't want it to?

@fultonj fultonj marked this pull request as ready for review December 17, 2024 13:43
@fmount
Contributor

fmount commented Dec 17, 2024

> What is weird is that I didn't expect that job to perform a Ceph minor update. It was added in unigamma only, where we know everything about the status of Ceph in terms of sizing, while we don't have control over the component-compute-edpm-update job. I'm wondering if the right choice would be to not perform such an update in that job, but I'm also OK with keeping it and proceeding even if the status is HEALTH_WARN. However, as per the cephadm code, there is a chance of another failure when the new OSD containers are rolled out, because of the HEALTH_WARN and the potential data movement (scrub, recovery, etc.) happening at that time.
>
> Per our discussion we now know that `cifmw_ceph_update: false` was not set. Maybe we should default it to false and only override it in Gamma, since the job was getting run when we didn't want it to?

Yeah, it's already false and set to true in the unigamma variables. However, if other jobs reference the same variables, they should provide their own overrides.
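
In variable terms, the setup described above looks roughly like the sketch below; the comments about where each value lives are assumptions about the layout, not the exact files in the repo:

```yaml
# Illustrative sketch only; file locations are assumptions, not the exact repo layout.
---
# e.g. a ci-framework default: the Ceph minor update is opt-in
cifmw_ceph_update: false
---
# e.g. the unigamma scenario variables: the only place that turns it on
cifmw_ceph_update: true
```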

Contributor

@fmount fmount left a comment


/lgtm

@openshift-ci openshift-ci bot added the lgtm label Dec 17, 2024
@fmount
Contributor

fmount commented Dec 17, 2024

/approve

@katarimanojk
Contributor

/lgtm

@pablintino
Collaborator

/approve

Contributor

openshift-ci bot commented Dec 17, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: fmount, pablintino

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-merge-bot openshift-merge-bot bot merged commit 17a08ad into openstack-k8s-operators:main Dec 17, 2024
4 of 5 checks passed