
CPU usage is inaccurate if the clock speed drops below the base speed with SpeedStep #2418

Open
7kt4 opened this issue Feb 9, 2025 · 4 comments


@7kt4

7kt4 commented Feb 9, 2025

Brief description of your issue

If Intel SpeedStep/SpeedShift is enabled and the minimum processor frequency is set below the base clock speed, the CPU sometimes drops below the base clock, making the reported CPU usage erratic. Intel SpeedStep is enabled by default on almost all modern Intel CPUs, and with Windows' default power settings this issue occurs frequently (minimum processor state 0% = lowest frequency = 800 MHz in my case).

Note: it is not a problem when the CPU turbo boosts past its base speed. For example, in my case, the CPU usage is accurate whether it is running at 2.9 GHz or 3.9 GHz.

Proof of concept:
Observe the change in the winlogon.exe process, as well as the overall CPU usage:
I realized after recording that I captured this in Process Hacker, but the same issue is present in the latest version of System Informer too.

bandicam.2025-02-09.06-24-44-518.mp4

Steps to reproduce (optional)

  1. Observe a process with a constant CPU usage, or the overall CPU usage
  2. Adjust the minimum/maximum processor state with the Windows power settings or, (much) better, use ThrottleStop (see the powercfg commands after this list)
  3. Force the CPU to operate only below its base clock speed, and you can see that the CPU usage increases dramatically
  4. Set the CPU back to its base clock speed, and you can see that the usage decreases again
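
For step 2, a minimal example using the built-in powercfg tool (the scheme_current, sub_processor, PROCTHROTTLEMIN, and PROCTHROTTLEMAX aliases are the standard Windows power-setting aliases; the 50% cap is just an illustrative value for a CPU whose base clock sits around half of its maximum):

```bat
:: Cap the processor state of the active power scheme so the CPU
:: stays below its base clock, then re-apply the scheme.
powercfg /setacvalueindex scheme_current sub_processor PROCTHROTTLEMIN 0
powercfg /setacvalueindex scheme_current sub_processor PROCTHROTTLEMAX 50
powercfg /setactive scheme_current
```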

Expected behavior (optional)

No response

Actual behavior (optional)

No response

Environment (optional)

Windows 10 Enterprise LTSC IoT 2021 (it's not Windows 7, I promise)
@dmex
Member

dmex commented Feb 10, 2025

  1. Force the CPU to operate only below its base clock speed, and you can see that the CPU usage increases dramatically
  2. Set the CPU back to its base clock speed, and you can see that the usage decreases again

Can you provide more context for why you believe the CPU usage is inaccurate when the frequency changes? Utilization increasing when the clock speed is lower, and decreasing when the clock speed is higher, is the expected behavior.

Clock speed is a measure of the cycles the CPU executes per second. A CPU with a clock speed of 5 GHz executes 5 billion cycles per second. A lower clock speed means fewer cycles available per second, which means higher utilization, because processes are consuming a larger proportion of a significantly smaller pool of available cycles.
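
A minimal sketch in C of this cycle-based accounting (an illustration of the scheme described above, not System Informer's actual code; it uses the documented QueryProcessCycleTime and CallNtPowerInformation APIs, assumes a 1-second sampling interval, and simplifies by applying CPU 0's momentary frequency to every core):

```c
#include <windows.h>
#include <powrprof.h> // CallNtPowerInformation, PROCESSOR_POWER_INFORMATION
#include <stdio.h>

#pragma comment(lib, "powrprof.lib")

int main(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);

    // Assumes at most 64 logical processors for brevity.
    PROCESSOR_POWER_INFORMATION ppi[64] = {0};

    ULONG64 cyclesStart = 0, cyclesEnd = 0;
    QueryProcessCycleTime(GetCurrentProcess(), &cyclesStart);
    Sleep(1000); // 1-second sampling interval
    QueryProcessCycleTime(GetCurrentProcess(), &cyclesEnd);

    // Momentary (not base) frequency of each logical processor, in MHz.
    CallNtPowerInformation(ProcessorInformation, NULL, 0, ppi, sizeof(ppi));

    // Total cycles the machine could deliver during the interval.
    // Simplification: use CPU 0's current frequency for all cores.
    ULONG64 availableCycles =
        (ULONG64)ppi[0].CurrentMhz * 1000000ULL * si.dwNumberOfProcessors;

    // The denominator shrinks as the clock drops, so the same cycle
    // consumption yields a higher percentage at a lower frequency.
    double usage = 100.0 * (double)(cyclesEnd - cyclesStart)
                         / (double)availableCycles;

    printf("CPU usage over 1 s: %.2f%% (%lu MHz, %llu cycles available)\n",
           usage, ppi[0].CurrentMhz, availableCycles);
    return 0;
}
```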

[Image]

@dmex dmex self-assigned this Feb 10, 2025
@7kt4
Author

7kt4 commented Feb 10, 2025

Because with SpeedStep it constantly fluctuates below and above the base speed. It's not a fixed speed. The CPU is constantly changing its speed up and down, and as a result so is the CPU usage in System Informer. It is normal for the speed to jump anywhere between 1 GHz and 4 GHz every second during normal computer operation. With SpeedStep the clock isn't limited indefinitely; it's only limited until the CPU decides it needs to run at a higher frequency, which it readily can (and it decides this several times per second).

@dmex
Member

dmex commented Feb 10, 2025

The CPU is constantly changing its speed up and down, and as a result so is the CPU usage in System Informer.

CPU utilization is based on the number of cycles consumed, and the clock speed determines the total cycles available. The CPU usage of a process/thread is higher at a lower frequency because there are fewer cycles available at lower frequencies, and the process/thread is therefore consuming a larger proportion of them.

It is normal for the speed to jump anywhere between 1 GHz and 4 GHz every second during normal computer operation.

| Processor Clock | Processor Cycles | Thread Cycles | Math | CPU Usage |
| --- | --- | --- | --- | --- |
| 798 MHz | 798,000,000 | 6,543,600 | $Percentage = \left( \frac{6,543,600}{798,000,000} \right) \times 100$ | 0.82% |
| 2769 MHz | 2,769,000,000 | 8,583,900 | $Percentage = \left( \frac{8,583,900}{2,769,000,000} \right) \times 100$ | 0.31% |
[Image] [Image]

The process consumed fewer cycles at the lower frequency but shows a higher utilization, because the total available cycles were also lower.

The CPU is constantly changing its speed up and down, and as a result so is the CPU usage in System Informer.

If you reduced the clock speed of the processor to 6 MHz, the process would show 100% utilization, and that reading would be correct, wouldn't it?
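
Worked through with the numbers from the table above (illustrative arithmetic, not a new measurement): at 6 MHz the processor provides only 6,000,000 cycles per second in total, fewer than the 6,543,600 cycles the thread consumed in one second at 798 MHz. The thread would saturate the processor, so

$Percentage = \left( \frac{6,000,000}{6,000,000} \right) \times 100 = 100\%$

would be an accurate reading rather than an artifact.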

@7kt4
Author

7kt4 commented Feb 10, 2025

I understand, and I guess that it's just a matter of opinion.

My point was that, since people treat CPU usage as a measure of the CPU's potential capacity, it would make more sense to measure it against the clock speed the CPU would increase to under load, rather than against whatever lower clock speed it happens to be at in a given moment. This is how the Windows Task Manager works, for example.

I also don't understand why this behavior is only present when the CPU speed is below its base clock. If the speed is above the base clock with Turbo Boost, the usage is still calculated against the base clock and not the actual clock speed.
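
Expressed as a formula (my restatement of the suggestion, not an existing System Informer behavior): divide the consumed cycles by the cycles the CPU could deliver at its base clock over the interval, instead of by the cycles available at the momentary clock:

$Percentage_{normalized} = \left( \frac{Cycles_{thread}}{f_{base} \times \Delta t} \right) \times 100$

With the 798 MHz row from the table above and a 2.9 GHz base clock, that gives $\left( \frac{6,543,600}{2,900,000,000} \right) \times 100 \approx 0.23\%$ instead of 0.82%, and the figure no longer jumps when SpeedStep moves the frequency around.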
