
✨Feature Request - support for generating coverage reports #17

Open
briantist opened this issue Feb 25, 2023 · 7 comments

@briantist
It would be really cool if we could generate coverage reports with Monkeyble, so we could see coverage in our CI/CD.

@Sispheor (Contributor)

Hi,
This kind of feature mostly exists in unit testing for real programming languages, because there the tests are truly independent.
Monkeyble is end to end, so I'm not sure this info would be pertinent: we traverse the full playbook to validate a scenario, which means that even if a task is skipped, you still pass through it.

What do you have in mind exactly?

@briantist (Author)

Sure, that's a good point, and I'm not sure I have a lot of specifics in mind. But on a given run, for example, skipping a task would probably be the equivalent of a line not being tested. Ideally, the combination of scenarios we test ends up covering all paths.

I suppose conditionals get a bit complex. If a task has a conditional, we want to test it with both true (it executes) and false (it skips), so in that case I guess:

  • a skip that's not caused by a when: conditional is a miss
  • a skip due to a false conditional is a partial
  • an execution due to a true conditional is a partial
  • the combination of the above two is fully tested

I don't think we'd need to dig into the particulars of complicated conditionals (a when with a list, for example); for this purpose I think we only care about its final result, if that makes sense.
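The rubric above could be sketched as a small aggregator. This is purely hypothetical (not part of Monkeyble): it takes, for one task, the observations from several scenario runs and classifies the task's overall coverage.

```python
def coverage_status(runs):
    """Classify one task's coverage from observations across runs.

    Each run is an (executed, when_result) pair, where when_result is
    True/False for the outcome of the task's `when:` conditional, or
    None if the task has no conditional.
    """
    branches = set()
    for executed, when_result in runs:
        if when_result is None:
            if executed:
                return "full"  # unconditional task executed at least once
            continue           # skipped without a when: counts as a miss
        branches.add(when_result)
    if branches == {True, False}:
        return "full"      # both conditional branches were exercised
    if branches:
        return "partial"   # only one branch was exercised
    return "miss"          # task was never reached meaningfully


# Example: one run executed the task (when: true), another skipped it
# (when: false), so together they fully cover the conditional.
status = coverage_status([(True, True), (False, False)])
```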

@Sispheor (Contributor)

And Monkeyble already answers that: you can test both scenarios, with the conditional true or false.

@briantist (Author)

> And Monkeyble already answers that: you can test both scenarios, with the conditional true or false.

Right, exactly. The idea is to generate a standard coverage report that we can use with coverage tools like https://codecov.io , which makes it easier to see if we missed some pathways and want to test more scenarios.
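One "standard report" that tools like codecov accept is the LCOV text format. A rough sketch of the idea, treating each task's line number in the playbook YAML as a "line" to cover (the function name and the hit-count input are hypothetical, for illustration only):

```python
def tasks_to_lcov(playbook_path, task_hits):
    """Render per-task hit counts as a single LCOV record.

    task_hits maps a task's line number in the playbook file to the
    number of times that task actually executed across a run.
    """
    lines = [f"SF:{playbook_path}"]                    # source file
    for line_no in sorted(task_hits):
        lines.append(f"DA:{line_no},{task_hits[line_no]}")  # line, hits
    lines.append(f"LF:{len(task_hits)}")               # lines found
    hit = sum(1 for h in task_hits.values() if h > 0)
    lines.append(f"LH:{hit}")                          # lines hit
    lines.append("end_record")
    return "\n".join(lines)


# Two tasks: the one at line 3 ran, the one at line 10 was skipped.
report = tasks_to_lcov("playbooks/site.yml", {3: 1, 10: 0})
```

Each scenario run would emit its own record, and the coverage service merges them, which matches the "multiple scenarios, multiple reports" idea below.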

@Sispheor (Contributor)

The problem is that Monkeyble works as a callback plugin; it's not aware of side executions.
So we cannot know that we passed through a particular task before, or whether the result of its condition was true or false.

@briantist (Author)

> The problem is that Monkeyble works as a callback plugin; it's not aware of side executions. So we cannot know that we passed through a particular task before, or whether the result of its condition was true or false.

That makes sense. I'm not suggesting that a single report take all of the other scenarios into account; rather, each run would only be concerned with what that run covered. Multiple scenarios result in multiple reports, and the coverage tool takes all of them into account.

I think that's how it usually works with other tools, though I am a bit out of my depth on the particulars.

@dmsimard

> The problem is that Monkeyble works as a callback plugin. It's not aware of side executions.

I can relate to the limitations of running within a callback interface, even though what we can do is sometimes surprising :p

@briantist it's probably not what you are looking for, but once playbook task results are recorded in ara, the data is available to query over its API.

If the data you need is in the results, you could find it and use it that way.
For example:
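(The original example was not captured here.) As a hedged sketch of the general idea, assuming an ara server exposing its REST API under `/api/v1/` (check your server's browsable API for the exact endpoints and filters), one could query task results for a playbook and pick out the skipped ones:

```python
from urllib.parse import urlencode

ARA_BASE = "http://ara.example.org"  # hypothetical server address


def build_results_url(base, playbook_id, status=None):
    """Build a query URL for task results of one playbook run.

    The /api/v1/results path and its query parameters are assumptions
    modeled on ara's REST API; verify them against your installation.
    """
    params = {"playbook": playbook_id}
    if status:
        params["status"] = status
    return f"{base}/api/v1/results?{urlencode(params)}"


def skipped_task_ids(results):
    """From a list of result dicts, collect ids of skipped tasks."""
    return [r["task"] for r in results if r.get("status") == "skipped"]


url = build_results_url(ARA_BASE, 42, status="skipped")
# Fetching `url` (e.g. with urllib.request) would return JSON whose
# "results" list can then be fed to skipped_task_ids().
```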
