
Logging init config logic minor modifications #48

Open · wants to merge 9 commits into main

Conversation

li-ruihao (Contributor) commented Mar 20, 2025

Description

Logic changes to logging initialization and configuration in application/__init__.py. This fixes an issue introduced by PR #42 (merged): in PR #42 the logger was declared and initialized before any logging configuration was applied (dictConfig in the success case, basicConfig in the failure case). Because a logger retrieved that early is cached, subsequent logging statements did not pick up the configuration loaded afterwards, which led to ambiguous behaviour. This PR makes minor modifications so that the logger is retrieved only after the configuration has been loaded, ensuring it picks up the correct settings.
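
For context, a minimal, illustrative sketch of the pre-fix ordering described above; this is an assumption about the shape of the code, not the exact contents of application/__init__.py:

import logging
import logging.config

import yaml

# The module-level logger is created before any configuration is applied.
logger = logging.getLogger(__name__)

try:
    with open("log_config.yaml", "r") as f:
        config = yaml.safe_load(f)
    # dictConfig defaults to disable_existing_loggers=True, so a logger created
    # this early can be disabled (or keep stale settings) unless it or an
    # ancestor is explicitly named in the loaded config.
    logging.config.dictConfig(config)
except Exception:
    logging.basicConfig(level=logging.INFO)
    logger.exception("Logging setup failed")
else:
    logger.warning("Logging setup is completed with config=%s", config)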

Summary by CodeRabbit

  • Chores
    • Enhanced application logging initialization for consistent error handling, ensuring diagnostic information is recorded reliably in both the success and fallback configuration paths.

coderabbitai bot commented Mar 20, 2025

Walkthrough

The changes adjust the logging configuration in application/__init__.py. The logger variable is now declared as a global and is consistently initialized in both the successful configuration load and exception handling paths. After attempting to load the configuration from log_config.yaml, the code initializes the logger with logging.getLogger(__name__) and logs the outcome appropriately.
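
A minimal sketch of the corrected flow (the file name log_config.yaml and the log messages are taken from this PR; surrounding details such as the config path are illustrative assumptions):

import logging
import logging.config

import yaml

try:
    with open("log_config.yaml", "r") as f:
        config = yaml.safe_load(f)
    logging.config.dictConfig(config)
    # The logger is retrieved only after dictConfig has run, so it reflects the
    # loaded configuration instead of a previously cached, unconfigured logger.
    logger = logging.getLogger(__name__)
except Exception:
    # Fallback configuration, then retrieve the logger so the exception is
    # recorded with the fallback settings.
    logging.basicConfig(
        format="%(asctime)s %(levelname)s [%(name)s:%(lineno)d] %(message)s",
        level=logging.INFO,
        force=True,
    )
    logger = logging.getLogger(__name__)
    logger.exception("Logging setup failed")
else:
    logger.warning("Logging setup is completed with config=%s", config)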

Changes

File(s): application/__init__.py
Change Summary: Declares logger as a global variable; moves logger initialization to after the attempt to load log_config.yaml; ensures logger is set in both the try and except branches.

Sequence Diagram(s)

sequenceDiagram
    participant App as Application
    participant LogConfig as log_config.yaml
    participant Logging as Logging Library

    Note over App: Start logging setup
    App->>LogConfig: Attempt to load configuration
    alt Configuration loaded successfully
        App->>Logging: logging.getLogger(__name__)
        App->>Logging: logger.warning("Logging setup complete")
    else Exception occurs
        Logging-->>App: Exception raised
        App->>Logging: logging.basicConfig(...)
        App->>Logging: logging.getLogger(__name__)
        App->>Logging: logger.exception("Exception in loading logging config")
    end

Possibly related PRs

Suggested reviewers

  • junhaoliao

Poem

I'm a bunny coding with delight,
Hopping through logs, day and night.
Global logger now stands so tall,
Catching errors, both big and small.
With every line of code, a joyful leap,
Celebrating fixes as I softly peep!
🐇✨

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 462ac75 and 54bee53.

📒 Files selected for processing (1)
  • application/__init__.py (1 hunks)
🔇 Additional comments (3)
application/__init__.py (3)

35-35: Good change to fix the logger initialization timing.

Moving the logger initialization after the configuration is applied ensures the logger uses the proper configuration. This addresses the issue where the logger was being cached before configurations were applied.


39-40: Well-structured exception handling.

Initializing the logger in the exception branch is consistent with the success path and ensures that exception messages are properly logged with the fallback configuration.


35-42:

❓ Verification inconclusive

Verify logger behavior with unit tests.

The changes look good, but it would be beneficial to add unit tests that verify the logger's behavior with different configuration scenarios to ensure the fixes work as expected.


🌐 Web query:

How to test Python logging configuration in unit tests?

💡 Result:

To effectively test Python logging configurations in unit tests, developers can leverage built-in testing utilities from both the standard library and third-party frameworks. Here are the primary approaches:

Using unittest's assertLogs

The unittest.TestCase class provides an assertLogs context manager to capture log messages during tests:

import unittest
import logging

class LoggingTest(unittest.TestCase):
    def test_log_messages(self):
        logger = logging.getLogger('test_logger')
        with self.assertLogs(logger, level='DEBUG') as cm:
            logger.debug('Debug message')
            logger.error('Error message')
        
        self.assertEqual(cm.output, [
            'DEBUG:test_logger:Debug message',
            'ERROR:test_logger:Error message'
        ])

Key features:

  • Captures all log messages at or above specified level
  • Returns a LogCapture object containing output (formatted messages) and records (raw log records)
  • Supports testing logger names and log levels

Using pytest's caplog Fixture

For pytest users, the caplog fixture provides comprehensive logging test capabilities:

import logging

def test_log_messages(caplog):
    caplog.set_level(logging.INFO)
    logger = logging.getLogger('test_logger')

    logger.info('Information message')
    logger.warning('Warning message')

    assert len(caplog.records) == 2
    assert caplog.records[0].message == 'Information message'
    assert 'Warning' in caplog.text

Key features:

  • Access raw records through caplog.records
  • Get formatted text via caplog.text
  • Set different log levels using caplog.set_level()

Best Practices

  1. Test Message Formatting: Verify formatted messages include variables correctly
     logger.info('User %s logged in', 'john')
     assert 'User john logged in' in caplog.text
  2. Check Log Levels: Confirm appropriate severity levels are used
     assert caplog.records[0].levelname == 'ERROR'
  3. Test Multiple Handlers: Use logging.NullHandler or mock handlers to verify handler configurations
  4. Avoid Over-Mocking: Prefer capturing actual log output instead of mocking logger methods, so the full logging pipeline is exercised

Approach            Framework  Advantages                            Limitations
assertLogs          unittest   Native integration, context manager   Requires class inheritance
caplog fixture      pytest     Flexible level control, rich API      Requires pytest
Custom log handler  Any        Full control over log destinations    More setup required

For complex logging scenarios (e.g. structured logging), consider using dedicated testing libraries like Eliot. When testing logging configuration itself, verify handler registration and formatters by inspecting the logger's handlers and their configurations.
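
A small sketch of that last point, inspecting handlers and formatters directly (the dict passed to dictConfig here is a made-up example, not this repository's log_config.yaml):

import logging
import logging.config

logging.config.dictConfig({
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {"plain": {"format": "%(levelname)s %(name)s: %(message)s"}},
    "handlers": {"console": {"class": "logging.StreamHandler", "formatter": "plain"}},
    "loggers": {"application": {"handlers": ["console"], "level": "INFO"}},
})

app_logger = logging.getLogger("application")
# Verify the level, handler type, and formatter picked up from the config.
assert app_logger.level == logging.INFO
assert any(isinstance(h, logging.StreamHandler) for h in app_logger.handlers)
assert app_logger.handlers[0].formatter is not None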


Unit Test Logger Behavior

The changes in the logger setup look solid. However, to ensure robust behavior in all configuration scenarios, please add unit tests that verify the logger's output. For example:

  • Exception path: When an exception is raised during logger setup, validate that the fallback configuration is applied and that an error is logged via logger.exception("Logging setup failed").
  • Success path: When no exception occurs, confirm that the logger issues a warning with the expected message format (e.g., "Logging setup is completed with config=%s").

You can leverage testing utilities such as unittest.TestCase’s assertLogs context manager or pytest’s caplog fixture to capture and assert log messages. This will provide confidence that the logger behaves as intended under both successful and failure scenarios.
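
As a rough sketch of what such tests could look like with pytest's caplog: this assumes a hypothetical configure_logging() helper in application/__init__.py and a hypothetical LOG_CONFIG_PATH override, neither of which is confirmed by this PR (the real module may run this logic at import time, which would require importlib.reload or similar).

import logging

# Hypothetical helper; this PR does not necessarily expose one.
from application import configure_logging


def test_success_path_logs_completion(caplog):
    with caplog.at_level(logging.WARNING):
        configure_logging()
    # Message text taken from the snippet under review.
    assert "Logging setup is completed" in caplog.text


def test_failure_path_uses_fallback(caplog, monkeypatch, tmp_path):
    # Hypothetical knob: point the loader at a non-existent file to force the
    # except branch; the real code may locate its config differently.
    monkeypatch.setenv("LOG_CONFIG_PATH", str(tmp_path / "missing.yaml"))
    with caplog.at_level(logging.ERROR):
        configure_logging()
    assert "Logging setup failed" in caplog.text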

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 54bee53 and f5ba40f.

📒 Files selected for processing (1)
  • application/__init__.py (1 hunks)
🔇 Additional comments (2)
application/__init__.py (2)

33-33: LGTM - Proper logger initialization after configuration.

The logger is now correctly initialized after the logging configuration has been applied via dictConfig. This ensures the logger adopts the intended configuration rather than using cached settings.


37-37: LGTM - Consistent logger initialization in exception handler.

Good implementation of logger initialization in the exception handling block, ensuring the logger is properly configured even when loading the YAML configuration fails.

except Exception:
    # Fallback to a basic configuration
    logging.basicConfig(format='%(asctime)s %(levelname)s [%(name)s:%(lineno)d] %(message)s', level=logging.INFO, force=True)
    logger = logging.getLogger(__name__)
    logger.exception("Logging setup failed")
else:
    logger.warning("Logging setup is completed with config=%s", config)

🧹 Nitpick (assertive)

Consider limiting logged configuration details.

Logging the entire configuration object could potentially expose sensitive information (like log file paths, credentials in custom handlers) and create verbose log entries.

- logger.warning("Logging setup is completed with config=%s", config)
+ logger.warning("Logging setup completed successfully")
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
- logger.warning("Logging setup is completed with config=%s", config)
+ logger.warning("Logging setup completed successfully")
