Replies: 1 comment 5 replies
---
I think the short answer here is "no": we don't have a test/demonstration requirement for these claims, at least in the rules I've seen. I think it's a good question, though, and even this case isn't so obvious; I didn't know the answer offhand, for example.

One problem with including these tests is that a lot of these benchmarks will be context-dependent. Maybe that's an argument against using efficiency as a rule justification in general, but I think it's more that any performance-related rule comes with the caveat that you should benchmark your specific use case.

On the other hand, if the efficiency claims are outright wrong, then they definitely shouldn't be mentioned in the docs. I don't think that's the case for SIM118, though, based on some tiny benchmarks I just ran and on a look at the CPython source code.
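For context, a minimal micro-benchmark along these lines might look like the sketch below. This is my own illustration, not the exact benchmark from this thread: the dictionary size, key choice, and iteration count are all assumptions, and the numbers will vary by workload and Python version.

```python
import timeit

# A sample dictionary; size and contents are arbitrary assumptions.
d = {i: i for i in range(1000)}

# Time a membership test directly on the dict vs. on dict.keys().
plain = timeit.timeit("500 in d", globals={"d": d}, number=1_000_000)
keys = timeit.timeit("500 in d.keys()", globals={"d": d}, number=1_000_000)

print(f"key in d:        {plain:.3f}s")
print(f"key in d.keys(): {keys:.3f}s")
```

In CPython, `dict.keys()` returns a view object whose membership test delegates to the underlying dict, so both forms are constant-time hash lookups; the measurable difference comes mainly from the extra attribute lookup and method call.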
---
Are any quantifiable tests done before stating that a rule has a performance impact on code execution? Stating that one way is more efficient than another doesn't feel like enough information to make an informed decision.
For example, SIM118 states that `key in dict` is more efficient than `key in dict.keys()`. flake8-simplify doesn't specify that it is more efficient, while the StackOverflow comment that is cited as the basis for the rule makes a purely theoretical argument: speculation, although with a valid basis.
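To illustrate, this is the kind of code the rule targets (my own sketch, not an excerpt from the rule's documentation):

```python
# Hypothetical example of the pattern SIM118 flags.
d = {"a": 1, "b": 2}

# Flagged: calls .keys() only to perform a membership test.
if "a" in d.keys():
    print("found")

# Suggested rewrite: membership on the dict itself, same semantics.
if "a" in d:
    print("found")
```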
To be clear, I am not trying to argue whether this specific rule has a performance impact. I'm advocating for including a reason for (or proof of) the performance impact in the "Why is this bad?" section, to allow an informed decision.