
Merge pull request #82 from arcadelab/dev
Dev
benjamindkilleen authored Mar 13, 2023
2 parents 74a2649 + 445f980 commit a14348d
Showing 37 changed files with 9,362 additions and 3,629 deletions.
16 changes: 8 additions & 8 deletions README.md
@@ -40,9 +40,9 @@ DeepDRR requires an NVIDIA GPU, preferably with >11 GB of memory.
conda install -c conda-forge pycuda
```

to install it in your environment.

4. You may also wish to [install PyTorch](https://pytorch.org/get-started/locally/) separately, depending on your setup.
5. Install from `PyPI`

```bash
@@ -116,7 +116,7 @@ DeepDRR combines machine learning models for material decomposition and scatter

![DeepDRR Pipeline](https://raw.githubusercontent.com/arcadelab/deepdrr/master/images/deepdrr_workflow.png)

-Further details can be found in our MICCAI 2018 paper "DeepDRR: A Catalyst for Machine Learning in Fluoroscopy-guided Procedures" and the subsequent Invited Journal Article in the IJCARS Special Issue of MICCAI "Enabling Machine Learning in X-ray-based Procedures via Realistic Simulation of Image Formation". The conference preprint can be accessed on arXiv here: https://arxiv.org/abs/1803.08606.
+Further details can be found in our MICCAI 2018 paper "DeepDRR: A Catalyst for Machine Learning in Fluoroscopy-guided Procedures" and the subsequent Invited Journal Article in the IJCARS Special Issue of MICCAI "Enabling Machine Learning in X-ray-based Procedures via Realistic Simulation of Image Formation". The conference preprint can be accessed on arXiv here: <https://arxiv.org/abs/1803.08606>.
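
For orientation, here is a minimal sketch of the simulation workflow described above, modeled on the project's examples; the exact entry points such as `Volume.from_nifti` and `MobileCArm.move_to` should be checked against the installed version:

```python
from deepdrr import MobileCArm, Volume
from deepdrr.projector import Projector  # imported separately; requires pycuda

# Hypothetical CT path; any NIfTI volume works.
ct = Volume.from_nifti("/path/to/ct.nii.gz")
carm = MobileCArm()

# Position the virtual C-arm and render a DRR from the current pose.
with Projector(ct, carm=carm) as projector:
    carm.move_to(alpha=30, beta=10, degrees=True)
    image = projector()
```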

### Representative Results

@@ -126,13 +126,13 @@ The figure below shows representative radiographs generated using DeepDRR from C

### Applications - Pelvis Landmark Detection

-We have applied DeepDRR to anatomical landmark detection in pelvic X-ray: "X-ray-transform Invariant Anatomical Landmark Detection for Pelvic Trauma Surgery", also early-accepted at MICCAI'18: https://arxiv.org/abs/1803.08608 and now with quantitative evaluation in the IJCARS Special Issue on MICCAI'18: https://link.springer.com/article/10.1007/s11548-019-01975-5. The ConvNet for prediction was trained on DeepDRRs of 18 CT scans of the NIH Cancer Imaging Archive and then applied to ex vivo data acquired with a Siemens Cios Fusion C-arm machine equipped with a flat panel detector (Siemens Healthineers, Forchheim, Germany). Some representative results on the ex vivo data are shown below.
+We have applied DeepDRR to anatomical landmark detection in pelvic X-ray: "X-ray-transform Invariant Anatomical Landmark Detection for Pelvic Trauma Surgery", also early-accepted at MICCAI'18: <https://arxiv.org/abs/1803.08608> and now with quantitative evaluation in the IJCARS Special Issue on MICCAI'18: <https://link.springer.com/article/10.1007/s11548-019-01975-5>. The ConvNet for prediction was trained on DeepDRRs of 18 CT scans of the NIH Cancer Imaging Archive and then applied to ex vivo data acquired with a Siemens Cios Fusion C-arm machine equipped with a flat panel detector (Siemens Healthineers, Forchheim, Germany). Some representative results on the ex vivo data are shown below.

![Prediction Performance](https://raw.githubusercontent.com/arcadelab/deepdrr/master/images/landmark_performance_real_data.PNG)

### Applications - Metal Tool Insertion

-DeepDRR has also been applied to simulate X-rays of the femur during insertion of dexterous manipulators in orthopedic surgery: "Localizing dexterous surgical tools in X-ray for image-based navigation", which has been accepted at IPCAI'19: https://arxiv.org/abs/1901.06672. Simulated images are used to train a concurrent segmentation and localization network for tool detection. We found consistent performance on both synthetic and real X-rays of ex vivo specimens. The tool model, simulation image and detection results are shown below.
+DeepDRR has also been applied to simulate X-rays of the femur during insertion of dexterous manipulators in orthopedic surgery: "Localizing dexterous surgical tools in X-ray for image-based navigation", which has been accepted at IPCAI'19: <https://arxiv.org/abs/1901.06672>. Simulated images are used to train a concurrent segmentation and localization network for tool detection. We found consistent performance on both synthetic and real X-rays of ex vivo specimens. The tool model, simulation image and detection results are shown below.

This capability has not been tested in version 1.0. For tool insertion, we recommend working with [Version 0.1](https://github.com/arcadelab/deepdrr/releases/tag/0.1) for the time being.

@@ -223,18 +223,18 @@ For the original DeepDRR, released alongside our 2018 paper, please see the [Ver
## Acknowledgments

CUDA Cubic B-Spline Interpolation (CI) used in the projector:
-https://github.com/DannyRuijters/CubicInterpolationCUDA
+<https://github.com/DannyRuijters/CubicInterpolationCUDA>
D. Ruijters, B. M. ter Haar Romeny, and P. Suetens. Efficient GPU-Based Texture Interpolation using Uniform B-Splines. Journal of Graphics Tools, vol. 13, no. 4, pp. 61-69, 2008.

The projector is a heavily modified and ported version of the implementation in CONRAD:
-https://github.com/akmaier/CONRAD
+<https://github.com/akmaier/CONRAD>
A. Maier, H. G. Hofmann, M. Berger, P. Fischer, C. Schwemmer, H. Wu, K. Müller, J. Hornegger, J. H. Choi, C. Riess, A. Keil, and R. Fahrig. CONRAD—A software framework for cone-beam imaging in radiology. Medical Physics 40(11):111914-1-8. 2013.

Spectra are taken from MCGPU:
A. Badal, A. Badano, Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit. Med Phys. 2009 Nov;36(11): 4878–80.

The segmentation pipeline is based on the Vnet architecture:
-https://github.com/mattmacy/vnet.pytorch
+<https://github.com/mattmacy/vnet.pytorch>
F. Milletari, N. Navab, S-A. Ahmadi. V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation. arXiv:160604797. 2016.

We gratefully acknowledge the support of the NVIDIA Corporation with the donation of the GPUs used for this research.
3 changes: 2 additions & 1 deletion deepdrr/annotations/__init__.py
@@ -1,3 +1,4 @@
from .line_annotation import LineAnnotation
from .fiducials import FiducialList, Fiducial

-__all__ = ['LineAnnotation']
+__all__ = ["LineAnnotation", "FiducialList", "Fiducial"]
148 changes: 148 additions & 0 deletions deepdrr/annotations/fiducials.py
@@ -0,0 +1,148 @@
from __future__ import annotations

import logging
from typing import List, Literal, Optional
from pathlib import Path
import numpy as np
import json
import pyvista as pv
import pandas as pd

from .. import geo, utils
from ..vol import Volume

log = logging.getLogger(__name__)


class FiducialList:
    # Can be treated like a list of Point3Ds
    def __init__(
        self,
        points: List[geo.Point3D],
        world_from_anatomical: Optional[geo.FrameTransform] = None,
        anatomical_coordinate_system: Literal["RAS", "LPS"] = "RAS",
    ):
        self.points = points
        self.world_from_anatomical = world_from_anatomical
        self.anatomical_coordinate_system = anatomical_coordinate_system

    def __getitem__(self, index):
        return self.points[index]

    def __len__(self):
        return len(self.points)

    def __iter__(self):
        return iter(self.points)

    def __repr__(self):
        return f"FiducialList({self.points})"

    def __str__(self):
        return str(self.points)

    def to_RAS(self) -> FiducialList:
        if self.anatomical_coordinate_system == "RAS":
            return self
        else:
            return FiducialList(
                [geo.RAS_from_LPS @ p for p in self.points],
                self.world_from_anatomical,
                "RAS",
            )

    def to_LPS(self) -> FiducialList:
        if self.anatomical_coordinate_system == "LPS":
            return self
        else:
            return FiducialList(
                [geo.LPS_from_RAS @ p for p in self.points],
                self.world_from_anatomical,
                "LPS",
            )

    @classmethod
    def from_fcsv(
        cls, path: Path, world_from_anatomical: Optional[geo.FrameTransform] = None
    ) -> FiducialList:
        """Load an FCSV file exported from 3D Slicer.

        Args:
            path (Path): Path to the FCSV file.

        Returns:
            FiducialList: The 3D points in the file.
        """
        with open(path, "r") as f:
            lines = f.readlines()
        points = []
        coordinate_system = None
        for line in lines:
            if line.startswith("# CoordinateSystem"):
                coordinate_system = line.split("=")[1].strip()
            elif line.startswith("#"):
                continue
            else:
                x, y, z = line.split(",")[1:4]
                points.append(geo.point(float(x), float(y), float(z)))

        if coordinate_system is None:
            log.warning("No coordinate system specified in FCSV file. Assuming LPS.")
            coordinate_system = "LPS"
        assert coordinate_system in ["RAS", "LPS"], "Unknown coordinate system"

        return cls(
            points,
            world_from_anatomical=world_from_anatomical,
            anatomical_coordinate_system=coordinate_system,
        )

    @classmethod
    def from_json(
        cls, path: Path, world_from_anatomical: Optional[geo.FrameTransform] = None
    ):
        # TODO: add support for associated IDs of the fiducials. Should really be a list/dict.
        data = pd.read_json(path)
        control_points_table = pd.DataFrame.from_dict(
            data["markups"][0]["controlPoints"]
        )
        coordinate_system = data["markups"][0]["coordinateSystem"]
        # TODO: not sure if this works.
        points = [
            geo.point(*row[["x", "y", "z"]].values)
            for _, row in control_points_table.iterrows()
        ]

        return cls(
            points,
            world_from_anatomical=world_from_anatomical,
            anatomical_coordinate_system=coordinate_system,
        )

    def save(self, path: Path):
        raise NotImplementedError()


class Fiducial(geo.Point3D):
    @classmethod
    def from_fcsv(
        cls,
        path: Path,
        world_from_anatomical: Optional[geo.FrameTransform] = None,
    ):
        fiducial_list = FiducialList.from_fcsv(path)
        assert len(fiducial_list) == 1, "Expected a single fiducial"
        return cls(
            fiducial_list[0].data,
            world_from_anatomical=world_from_anatomical,
            anatomical_coordinate_system=fiducial_list.anatomical_coordinate_system,
        )

    @classmethod
    def from_json(
        cls, path: Path, world_from_anatomical: Optional[geo.FrameTransform] = None
    ):
        raise NotImplementedError

    def save(self, path: Path):
        raise NotImplementedError
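
The new `FiducialList` behaves like a list of `geo.Point3D` with coordinate-system bookkeeping attached. A short usage sketch (the `.fcsv` file name is hypothetical):

```python
from pathlib import Path
from deepdrr.annotations import FiducialList

# Load fiducials exported from 3D Slicer. The coordinate system is read from
# the "# CoordinateSystem" header line, defaulting to LPS when it is missing.
fiducials = FiducialList.from_fcsv(Path("landmarks.fcsv"))

# Convert to RAS (a no-op if already in RAS) and iterate like a list.
for point in fiducials.to_RAS():
    print(point)
```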
30 changes: 28 additions & 2 deletions deepdrr/annotations/line_annotation.py
@@ -13,6 +13,10 @@
log = logging.getLogger(__name__)


# TODO: make this totally independent of the Volume it corresponds to, and make a super-class for
# all annotations.


class LineAnnotation(object):
"""Really a "segment annotation", but Slicer calls it a line.
@@ -77,7 +81,7 @@ def world_from_anatomical(self) -> geo.FrameTransform:
        return self.volume.world_from_anatomical

    @classmethod
-    def from_markup(
+    def from_json(
        cls,
        path: str,
        volume: Optional[Volume] = None,
@@ -132,6 +136,10 @@ def from_markup(
            anatomical_coordinate_system=anatomical_coordinate_system,
        )

    @classmethod
    def from_markup(cls, *args, **kwargs):
        return cls.from_json(*args, **kwargs)

    def save(
        self,
        path: str,
@@ -223,7 +231,7 @@ def to_lps(x):
"display": {
"visibility": True,
"opacity": 1.0,
"color": [0.5, 0.5, 0.5],
"color": color,
"selectedColor": color,
"activeColor": [0.4, 1.0, 0.0],
"propertiesLabelVisibility": False,
@@ -271,6 +279,24 @@ def endpoint_in_world(self) -> geo.Point3D:
    def midpoint_in_world(self) -> geo.Point3D:
        return self.world_from_anatomical @ self.startpoint.lerp(self.endpoint, 0.5)

    @property
    def trajectory_in_world(self) -> geo.Vector3D:
        return self.endpoint_in_world - self.startpoint_in_world

    @property
    def direction_in_world(self) -> geo.Vector3D:
        return self.trajectory_in_world.normalized()

    def get_mesh(self):
        """Get the mesh in anatomical coordinates."""
        u = self.startpoint
        v = self.endpoint

        mesh = pv.Line(u, v)
        mesh += pv.Sphere(2.5, u)
        mesh += pv.Sphere(2.5, v)
        return mesh

    def get_mesh_in_world(
        self, full: bool = True, use_cached: bool = False
    ) -> pv.PolyData:
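
With `from_markup` retained as an alias of the renamed `from_json`, the new trajectory helpers can be exercised as below. A sketch with hypothetical paths; note that `direction_in_world` goes through `world_from_anatomical`, which above is taken from the associated volume:

```python
from deepdrr import Volume
from deepdrr.annotations import LineAnnotation

ct = Volume.from_nifti("/path/to/ct.nii.gz")
annotation = LineAnnotation.from_json("trajectory.mrk.json", volume=ct)

direction = annotation.direction_in_world  # unit vector from startpoint to endpoint
mesh = annotation.get_mesh()  # pyvista line with 2.5 mm spheres at the endpoints
```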
3 changes: 2 additions & 1 deletion deepdrr/device/__init__.py
@@ -1,6 +1,7 @@
from .device import Device
from .carm import CArm
from .mobile_carm import MobileCArm
from .simple_device import SimpleDevice


__all__ = ["Device", "CArm", "MobileCArm"]
__all__ = ["Device", "CArm", "MobileCArm", "SimpleDevice"]
44 changes: 42 additions & 2 deletions deepdrr/device/device.py
@@ -8,6 +8,10 @@
class Device(ABC):
    """A parent class representing X-ray device interfaces in DeepDRR.

    To implement a sub class, the following methods/attributes must be implemented:

    - device_from_camera3d

    Attributes:
        sensor_height (int): the height of the sensor in pixels.
        sensor_width (int): the width of the sensor in pixels.
@@ -23,6 +27,16 @@ class Device(ABC):
    source_to_detector_distance: float
    world_from_device: geo.FrameTransform

    @property
    def detector_height(self) -> float:
        """Height of the detector in mm."""
        return self.sensor_height * self.pixel_size

    @property
    def detector_width(self) -> float:
        """Width of the detector in mm."""
        return self.sensor_width * self.pixel_size

    @property
    def device_from_world(self) -> geo.FrameTransform:
        """Get the FrameTransform for the device's local frame.
@@ -75,6 +89,21 @@ def camera3d_from_world(self) -> geo.FrameTransform:
"""
return self.camera3d_from_device @ self.device_from_world

@property
def index_from_camera3d(self) -> geo.CameraProjection:
"""Get the CameraIntrinsicTransform for the device's camera3d_from_index frame (in the current pose).
Returns:
CameraIntrinsicTransform: the "index_from_camera3d" frame transformation for the device.
"""
return geo.CameraProjection(
self.camera_intrinsics, geo.FrameTransform.identity()
)

@property
def camera3d_from_index(self) -> geo.Transform:
return self.index_from_camera3d.inv

def get_camera_projection(self) -> geo.CameraProjection:
"""Get the camera projection for the device in the current pose.
@@ -93,18 +122,29 @@ def index_from_world(self) -> geo.CameraProjection:
        return self.get_camera_projection()

    @property
-    @abstractmethod
    def world_from_index(self) -> geo.Transform:
        """Get the world_from_index transform for the device in the current pose.

        Returns:
            Transform: the "world_from_index" transform for the device.
        """
        return self.index_from_world.inv

    @property
    def principle_ray(self) -> geo.Vector3D:
        """Get the principle ray for the device in the current pose in the device frame.

        The principle ray is the direction of the ray that passes through the center of the
        image. It points from the source toward the detector.

        By default, this is just the z axis, but this can be overridden by sub classes.

        Returns:
            Vector3D: the principle ray for the device as a unit vector.
        """
-        pass
+        principle_ray_in_camera3d = geo.v(0, 0, 1)
+        return self.device_from_camera3d @ principle_ray_in_camera3d

    @property
    def principle_ray_in_world(self) -> geo.Vector3D:
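
Per the updated docstring, a subclass mainly has to supply `device_from_camera3d`; `detector_height`, `detector_width`, `world_from_index`, and the default `principle_ray` then come from the base class. A minimal sketch under the assumption that no other abstract members remain (the class name and parameter values are hypothetical):

```python
from deepdrr import geo
from deepdrr.device import Device

class FixedDevice(Device):
    sensor_height: int = 1536
    sensor_width: int = 1536
    pixel_size: float = 0.194  # mm per pixel
    source_to_detector_distance: float = 1020.0  # mm

    def __init__(self):
        self.world_from_device = geo.FrameTransform.identity()

    @property
    def device_from_camera3d(self) -> geo.FrameTransform:
        # Toy choice: the camera3d frame coincides with the device frame.
        return geo.FrameTransform.identity()

device = FixedDevice()
print(device.detector_width)  # 1536 * 0.194 ≈ 298 mm
print(device.principle_ray)   # +z in the device frame, per the new default
```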
6 changes: 6 additions & 0 deletions deepdrr/device/mobile_carm.py
@@ -36,6 +36,12 @@ def pose_vector_angles(pose: geo.Vector3D) -> Tuple[float, float]:


class MobileCArm(Device):
    """A C-arm imaging device with orbital movement (alpha, beta) and isocenter movement (x, y, z).

    Default parameters are based on the Siemens CIOS Spin.
    """

    # basic parameters which can be safely set by user, but move_by() and reposition() are recommended.
    isocenter: geo.Point3D  # the isocenter point in the device frame
    alpha: float  # alpha angle in radians
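
Since setting `isocenter`, `alpha`, and `beta` directly is discouraged in favor of `move_by()` and `reposition()`, usage might look like the following; the keyword names (`delta_alpha`, `delta_beta`, `degrees`) are assumptions based on the attributes above:

```python
from deepdrr import MobileCArm

carm = MobileCArm()  # defaults modeled on the Siemens CIOS Spin

# Orbit relative to the current pose instead of assigning alpha/beta directly.
carm.move_by(delta_alpha=10, delta_beta=-5, degrees=True)
```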
