# Merge pull request #62 from BrainLesion/performance_test (Performance test)
Showing 24 changed files with 7,395 additions and 86 deletions.
**`.gitignore`**

```diff
@@ -160,3 +160,5 @@ cython_debug/
 #.idea/
 
+.DS_Store
 
+.vscode
```
**`README.md`**

````diff
@@ -1,63 +1,59 @@
 [![PyPI version panoptica](https://badge.fury.io/py/panoptica.svg)](https://pypi.python.org/pypi/panoptica/)
 
-# Panoptica
+# panoptica
 
 Computing instance-wise segmentation quality metrics for 2D and 3D semantic- and instance segmentation maps.
 
 ## Features
 
-The package provides 3 core modules:
+The package provides three core modules:
 
-1. Instance Approximator: instance approximation algorithms in panoptic segmentation evaluation. Available now: connected components algorithm.
-1. Instance Matcher: instance matching algorithm in panoptic segmentation evaluation, to align and compare predicted instances with reference instances.
-1. Instance Evaluator: Evaluation of panoptic segmentation performance by evaluating matched instance pairs and calculating various metrics like true positives, Dice score, IoU, and ASSD for each instance.
+1. Instance Approximator: instance approximation algorithms to extract instances from semantic segmentation maps/model outputs.
+2. Instance Matcher: matches predicted instances with reference instances.
+3. Instance Evaluator: computes segmentation and detection quality metrics for pairs of predicted and reference segmentation maps.
 
 ![workflow_figure](https://github.com/BrainLesion/panoptica/blob/main/examples/figures/workflow.png?raw=true)
 
 ## Installation
 
-The current release requires python 3.10. To install it, you can simply run:
+With a Python 3.10+ environment, you can install panoptica from [pypi.org](https://pypi.org/project/panoptica/):
 
 ```sh
 pip install panoptica
 ```
 
-## Use Cases
+## Use cases and tutorials
 
-All use cases have tutorials showcasing the usage that can be found at [BrainLesion/tutorials/panoptica](https://github.com/BrainLesion/tutorials/tree/main/panoptica).
+For tutorials featuring various use cases, cf. [BrainLesion/tutorials/panoptica](https://github.com/BrainLesion/tutorials/tree/main/panoptica).
 
 ### Semantic Segmentation Input
 
 <img src="https://github.com/BrainLesion/panoptica/blob/main/examples/figures/semantic.png?raw=true" alt="semantic_figure" height="300"/>
 
-Although for many biomedical segmentation problems, an instance-wise evaluation is highly relevant and desirable, they are still addressed as semantic segmentation problems due to lack of appropriate instance labels.
-
-Modules [1-3] can be used to obtain panoptic metrics of matched instances based on a semantic segmentation input.
+Although an instance-wise evaluation is highly relevant and desirable for many biomedical segmentation problems, they are still addressed as semantic segmentation problems due to the lack of appropriate instance labels.
 
 [Jupyter Notebook Example](https://github.com/BrainLesion/tutorials/tree/main/panoptica/example_spine_semantic.ipynb)
 
+This tutorial leverages all three modules.
+
 ### Unmatched Instances Input
 
 <img src="https://github.com/BrainLesion/panoptica/blob/main/examples/figures/unmatched_instance.png?raw=true" alt="unmatched_instance_figure" height="300"/>
 
-It is a common issue that instance segementation outputs have good segmentations with mismatched labels.
-
-For this case modules [2-3] can be utilized to match the instances and report panoptic metrics.
+It is a common issue that instance segmentation outputs feature good outlines but mismatched instance labels.
+For this case, modules 2 and 3 can be utilized to match the instances and report metrics.
 
 [Jupyter Notebook Example](https://github.com/BrainLesion/tutorials/tree/main/panoptica/example_spine_unmatched_instance.ipynb)
 
 ### Matched Instances Input
 
 <img src="https://github.com/BrainLesion/panoptica/blob/main/examples/figures/matched_instance.png?raw=true" alt="matched_instance_figure" height="300"/>
 
-Ideally the input data already provides matched instances.
-
-In this case module 3 can be used to directly report panoptic metrics without requiring any internal preprocessing.
-
-[Jupyter Notebook Example](https://github.com/BrainLesion/tutorials/tree/main/panoptica/example_spine_matched_instance.ipynb)
+If your predicted instances already match the reference instances, you can directly compute metrics with the third module, see the [Jupyter Notebook Example](https://github.com/BrainLesion/tutorials/tree/main/panoptica/example_spine_matched_instance.ipynb).
 
 ## Citation
 
-If you have used panoptica in your research, please cite us!
+If you use panoptica in your research, please cite it to support the development!
 
-TBA
+The citation can be exported from: _TODO_
+
+```
+upcoming citation
+```
````
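Since the revised README describes the three modules only in prose, a minimal end-to-end sketch of the semantic-input workflow may help. This is a sketch under the assumption that the release exports `SemanticPair`, `ConnectedComponentsInstanceApproximator`, `NaiveThresholdMatching`, and `Panoptic_Evaluator` with the constructor arguments shown; the linked tutorial notebooks remain the authoritative usage reference.

```python
import numpy as np
from panoptica import (
    ConnectedComponentsInstanceApproximator,  # module 1: semantic map -> instance labels
    NaiveThresholdMatching,                   # module 2: pair up predicted/reference instances
    Panoptic_Evaluator,                       # module 3: metrics over the matched pairs
    SemanticPair,
)

# Toy binary semantic masks; real inputs are 2D/3D label arrays, e.g. loaded from NIfTI.
pred = np.zeros((32, 32), dtype=np.uint8)
ref = np.zeros((32, 32), dtype=np.uint8)
pred[4:10, 4:10] = 1
ref[5:11, 5:11] = 1

evaluator = Panoptic_Evaluator(
    expected_input=SemanticPair,
    instance_approximator=ConnectedComponentsInstanceApproximator(),
    instance_matcher=NaiveThresholdMatching(),
)
result, _ = evaluator.evaluate(SemanticPair(pred, ref))
print(result)  # instance counts plus metrics such as Dice, IoU, and ASSD
```

For the unmatched- and matched-instances inputs described above, the approximation (and, in the matched case, the matching) stage would be skipped by passing the corresponding pair type instead.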
**New file** (benchmark plotting script, 94 lines added):

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from auxiliary.turbopath import turbopath

# List of CSV files with per-platform benchmark results
csv_list = [
    "benchmark/m1max/dataframe.csv",
    "benchmark/ryzen9/dataframe.csv",
]

# Read and concatenate the CSV files
concatenated_df = pd.concat([pd.read_csv(csv) for csv in csv_list], ignore_index=True)

# Set custom colors
colors = ["#1AFF1A", "#941AFF"]

title_font_size = 16
label_font_size = 14
tick_font_size = 12

# Get unique platforms and conditions
unique_platforms = concatenated_df["platform"].unique()
unique_conditions = concatenated_df["condition"].unique()

# Create one row of three subplots (approximation, matching, evaluation) per condition
fig, axes = plt.subplots(
    nrows=len(unique_conditions), ncols=3, figsize=(18, 6 * len(unique_conditions))
)

for i, condition in enumerate(unique_conditions):
    # Filter data for the current condition
    condition_df = concatenated_df[concatenated_df["condition"] == condition]

    # Boxplot for approximation time
    sns.boxplot(
        x="platform",
        y="approximation",
        data=condition_df,
        notch=False,
        ax=axes[i, 0],
        palette=dict(zip(unique_platforms, colors)),
        linewidth=2,
    )
    axes[i, 0].set_title(
        f"Approximation Time Comparison - {condition}", fontsize=title_font_size
    )
    axes[i, 0].set_xlabel("Platform", fontsize=label_font_size)
    axes[i, 0].set_ylabel("Approximation Time (s)", fontsize=label_font_size)
    axes[i, 0].tick_params(axis="both", which="major", labelsize=tick_font_size)

    # Boxplot for matching time
    sns.boxplot(
        x="platform",
        y="matching",
        data=condition_df,
        notch=False,
        ax=axes[i, 1],
        palette=dict(zip(unique_platforms, colors)),
        linewidth=2,
    )
    axes[i, 1].set_title(
        f"Matching Time Comparison - {condition}", fontsize=title_font_size
    )
    axes[i, 1].set_xlabel("Platform", fontsize=label_font_size)
    axes[i, 1].set_ylabel("Matching Time (s)", fontsize=label_font_size)
    axes[i, 1].tick_params(axis="both", which="major", labelsize=tick_font_size)

    # Boxplot for evaluation time
    sns.boxplot(
        x="platform",
        y="evaluation",
        data=condition_df,
        notch=False,
        ax=axes[i, 2],
        palette=dict(zip(unique_platforms, colors)),
        linewidth=2,
    )
    axes[i, 2].set_title(
        f"Evaluation Time Comparison - {condition}", fontsize=title_font_size
    )
    axes[i, 2].set_xlabel("Platform", fontsize=label_font_size)
    axes[i, 2].set_ylabel("Evaluation Time (s)", fontsize=label_font_size)
    axes[i, 2].tick_params(axis="both", which="major", labelsize=tick_font_size)

# Save the entire figure after all subplots have been created
plt.tight_layout()
file_name = turbopath(__file__).parent + "/boxplot_times.eps"

# Save as EPS, preserving the custom colors
fig.savefig(file_name, format="eps", bbox_inches="tight")
plt.close()

print("Plots saved successfully as EPS.")
```
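For context on what the script consumes: it assumes each `dataframe.csv` holds one row per benchmark run, with `platform` and `condition` columns plus per-stage timings in seconds. A hypothetical excerpt (column names taken from the code above; the values are invented for illustration):

```csv
platform,condition,approximation,matching,evaluation
m1max,semantic,0.84,0.11,3.92
m1max,semantic,0.79,0.12,4.05
ryzen9,semantic,0.91,0.10,3.47
```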