
chore(performance):tweaking of thresholds and improvements on performance smoketest #1879

Merged
merged 6 commits into main from performance/thresholds-tweaking
Feb 14, 2025

Conversation

dagfinno
Collaborator

@dagfinno dagfinno commented Feb 14, 2025

Description

  • Tweaked some thresholds for search: raised to 500 ms for "get dialogs" and "get dialog", for both end users and service owners.
  • Increased test duration from 30s to 60s to hopefully run more requests against a warmed-up system.
  • Set up parallelism in the performance job matrix, since we are no longer using standard GitHub runners.
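The threshold bump can be sketched as a k6-style options fragment (a minimal sketch, not the literal file contents; the metric tag names follow the PR text, and in a real k6 script this object would be exported as `export const options`):

```javascript
// Sketch of the kind of threshold change described above: the p(95)
// latency gate for the search endpoints is raised from 300 ms to 500 ms.
const options = {
  thresholds: {
    'http_req_duration{name:get dialogs}': ['p(95)<500'], // was p(95)<300
    'http_req_duration{name:get dialog}': ['p(95)<500'],  // was p(95)<300
  },
};
```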

Related Issue(s)

Verification

  • Your code builds clean without any errors or warnings
  • Manual testing done (required)
  • Relevant automated test added (if you find this hard, leave it and we'll help out)

Documentation

  • Documentation is updated (either in the docs directory, Altinnpedia, or a separate linked PR in altinn-studio-docs, if applicable)

@dagfinno dagfinno requested review from a team as code owners February 14, 2025 11:44
Copy link
Contributor

coderabbitai bot commented Feb 14, 2025

📝 Walkthrough

This pull request updates several CI/CD and performance test configurations. In the yt01 workflow, the run-performance-tests job now runs with increased concurrency, includes an additional test file, and has an extended duration. A separate K6 performance workflow file has been removed. Additionally, performance thresholds in two test files have been increased from 300ms to 500ms, and the service owner search test has been streamlined by replacing a validation step with a direct utility call.

Changes

| File(s) | Change Summary |
| --- | --- |
| `.github/workflows/ci-cd-yt01.yml` | Updated run-performance-tests job: increased max-parallel from 1 to 4, added createTransmissionsWithThresholds.js test file, and extended duration from 30s to 60s. |
| `.github/workflows/.../workflow-run-k6-ci-cd-yt01.yml` | Removed the K6 performance tests workflow configuration. |
| `tests/k6/tests/{enduser,serviceowner}/performance/*thresholds.js` | Updated HTTP request duration thresholds from 300ms to 500ms for specified test cases. |
| `tests/k6/tests/serviceowner/.../serviceowner-search.js` | Modified import statements and streamlined logic by removing the validateTestData call in favor of directly using randomItem to select an end user. |

Possibly related PRs

Suggested labels

enhancement

Suggested reviewers

  • oskogstad
  • arealmaas

📜 Recent review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c7404ae and 49c5be6.

📒 Files selected for processing (5)
  • .github/workflows/ci-cd-yt01.yml (1 hunks)
  • .github/workflows/workflow-run-k6-ci-cd-yt01.yml (0 hunks)
  • tests/k6/tests/enduser/performance/enduserSearchWithThresholds.js (1 hunks)
  • tests/k6/tests/serviceowner/performance/serviceOwnerSearchWithThresholds.js (1 hunks)
  • tests/k6/tests/serviceowner/performance/serviceowner-search.js (2 hunks)
💤 Files with no reviewable changes (1)
  • .github/workflows/workflow-run-k6-ci-cd-yt01.yml
🔇 Additional comments (6)
tests/k6/tests/serviceowner/performance/serviceowner-search.js (1)

2-3: LGTM! Code simplification improves efficiency.

The changes streamline the implementation by:

  1. Removing the unnecessary validation step
  2. Using the randomItem utility function directly
  3. Maintaining the same functionality with cleaner code

Also applies to: 18-18
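The direct selection the review praises can be sketched in plain JavaScript (a standalone sketch: in the actual test the helper comes from k6's utils library, and the end-user list comes from test data — the names below are hypothetical):

```javascript
// Standalone sketch of picking a random end user, mirroring the
// randomItem(array) utility from k6's utils library.
function randomItem(items) {
  return items[Math.floor(Math.random() * items.length)];
}

const endUsers = ['user-a', 'user-b', 'user-c']; // hypothetical test data
const endUser = randomItem(endUsers);
```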

tests/k6/tests/serviceowner/performance/serviceOwnerSearchWithThresholds.js (1)

9-10:

❓ Verification inconclusive

Verify if the increased thresholds are justified.

The thresholds for both operations have been increased by 200ms (from 300ms to 500ms). While this aligns with the PR objectives, we should verify:

  1. Historical performance data supports these new thresholds
  2. The increase won't mask potential performance regressions

🏁 Script executed:

#!/bin/bash
# Description: Analyze historical performance data to validate threshold changes

# Get recent performance test results from GitHub Actions
# Get the IDs of recent successful runs of the performance workflow
# (gh run view expects a run ID, not a URL)
gh run list --workflow "CI/CD YT01" --json databaseId,name,conclusion --jq '.[] | select(.name=="CI/CD YT01" and .conclusion=="success") | .databaseId' | head -n 5 | while read -r run_id; do
  echo "Analyzing run: $run_id"
  gh run view "$run_id" --log | grep -A 5 "http_req_duration{name:serviceowner search\|get dialog}"
done

Length of output: 165


Action: Verify Historical Data Manually for the New Thresholds

The PR raises the thresholds for both "serviceowner search" and "get dialog" from 300ms to 500ms. Although this matches the overall objectives, our automated retrieval did not provide historical performance data to confirm that a 500ms threshold is appropriate. Please verify manually that:

  • The new threshold of 500ms is consistent with historical performance trends.
  • The increase does not inadvertently allow performance regressions to go unnoticed.

Referenced location:
• File: tests/k6/tests/serviceowner/performance/serviceOwnerSearchWithThresholds.js, Lines 9-10

tests/k6/tests/enduser/performance/enduserSearchWithThresholds.js (1)

10-11: LGTM! Consistent threshold adjustments across user types.

The threshold increases mirror those in the service owner tests, maintaining consistency in performance expectations between different user types.

.github/workflows/ci-cd-yt01.yml (3)

123-123: LGTM! Increased parallelism improves test efficiency.

Increasing max-parallel to 4 will reduce the overall test execution time while maintaining test integrity.


135-135: LGTM! Extended duration improves test accuracy.

Doubling the test duration to 60s allows for:

  1. More requests to be processed
  2. Better statistical significance
  3. More accurate performance measurements on a warmed-up system
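In k6, the test duration can be set either from the CLI/workflow or in-script; a minimal in-script sketch of the change (illustrative only — in this PR the 60s value is set in the workflow file, not hard-coded in the tests):

```javascript
// Illustrative k6-style options: doubling the smoke-test duration so
// more requests land on a warmed-up system.
const options = {
  vus: 1,          // illustrative virtual-user count
  duration: '60s', // extended from '30s'
};
```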

128-128:

✅ Verification successful

Verify the new performance test's thresholds.

The addition of createTransmissionsWithThresholds.js expands test coverage, but we should verify its thresholds are consistent with other tests.


🏁 Script executed:

#!/bin/bash
# Description: Check thresholds in the new test file

echo "Analyzing thresholds in new test file:"
rg -A 10 "thresholds:" "tests/k6/tests/serviceowner/performance/createTransmissionsWithThresholds.js"

Length of output: 649


Threshold Verification Confirmed – No Changes Needed

The new performance test file (tests/k6/tests/serviceowner/performance/createTransmissionsWithThresholds.js) correctly defines the thresholds:

  • http_req_duration{name:create dialog} is set to ["p(95)<200"]
  • http_req_duration{name:create transmission} is set to ["p(95)<200"]
  • http_reqs thresholds remain intentionally empty

These values align with the expected threshold conventions used in our other performance tests. Please confirm that this configuration meets our overall performance criteria.
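The quoted configuration corresponds to a thresholds object along these lines (a sketch reconstructed from the values listed above; the real file may define additional options):

```javascript
// Thresholds as reported for createTransmissionsWithThresholds.js:
// p(95) latency under 200 ms for both operations, plus an intentionally
// empty http_reqs entry (collected for reporting, not gated).
const options = {
  thresholds: {
    'http_req_duration{name:create dialog}': ['p(95)<200'],
    'http_req_duration{name:create transmission}': ['p(95)<200'],
    http_reqs: [],
  },
};
```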


@dagfinno dagfinno added the performance Issue related to performance label Feb 14, 2025
@dagfinno dagfinno merged commit 8b8f609 into main Feb 14, 2025
23 checks passed
@dagfinno dagfinno deleted the performance/thresholds-tweaking branch February 14, 2025 11:53
dagfinno pushed a commit that referenced this pull request Feb 14, 2025
🤖 I have created a release *beep* *boop*
---


## [1.50.7](v1.50.6...v1.50.7) (2025-02-14)


### Miscellaneous Chores

* **performance:** tweaking of thresholds and improvements on performance smoketest ([#1879](#1879)) ([8b8f609](8b8f609))

---
This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please).