DeepTrack2 - A comprehensive deep learning framework for digital microscopy.

DeepTrack2 is a modular Python library for generating, manipulating, and analyzing image data pipelines for machine learning and experimental imaging.

TensorFlow Compatibility Notice: DeepTrack2 versions 2.0 and later do not support TensorFlow. If you need TensorFlow support, please install the legacy version 1.7.

This quick-start guide is intended for complete beginners and walks you through how to use DeepTrack2, from installation to training your first model. Let's get started!

Installation

DeepTrack2 2.0 requires Python 3.9 or later.

To install DeepTrack2, open a terminal or command prompt and run:

pip install deeptrack

or

python -m pip install deeptrack

This will automatically install the required dependencies.
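
To verify the installation, you can import the package from the command line and print its version (assuming the installed release exposes __version__, as recent releases are expected to):

python -c "import deeptrack; print(deeptrack.__version__)"

If this prints a version number without errors, DeepTrack2 is ready to use.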

Getting Started

Here you will find a series of notebooks that give you an overview of the core features of DeepTrack2 and how to use them:
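
While the notebooks go into much more detail, the core idea they build on can be sketched in a few lines: define a scatterer, image it through simulated optics, and resolve the result as an image. The snippet below is a minimal sketch of this pattern; the specific parameter values are illustrative assumptions, not recommended settings.

import numpy as np
import deeptrack as dt

# A point scatterer with a random position (in pixels) and a fixed intensity.
particle = dt.PointParticle(
    position=lambda: np.random.uniform(16, 112, 2),
    position_unit="pixel",
    intensity=100,
)

# A simulated fluorescence microscope that images the scatterer.
optics = dt.Fluorescence(
    NA=0.7,
    wavelength=680e-9,
    resolution=1e-6,
    magnification=10,
    output_region=(0, 0, 128, 128),
)

# Compose optics and scatterer into a pipeline, then render one image.
pipeline = optics(particle)
pipeline.update()           # resample all randomized properties
image = pipeline.resolve()  # render the simulated image

Calling update() before each resolve() draws new random property values, so the same pipeline can generate an endless stream of distinct training images.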

Examples

These are examples of how DeepTrack2 can be used on real datasets:

  • DTEx201 MNIST

    Training a fully connected neural network to identify handwritten digits using the MNIST dataset.

  • DTEx202 Single Particle Tracking

    Tracking a single particle in experimental videos. (Requires opencv-python compiled with FFmpeg.)

  • DTEx203 Multi-Particle Tracking

    Detecting quantum dots in a low-SNR image (see the noise-pipeline sketch after this list).

  • DTEx204 Particle Feature Extraction

    Extracting the radius and refractive index of particles.

  • DTEx205 Cell Counting

    Counting the number of cells in fluorescence images.

  • DTEx206 3D Multi-Particle Tracking

    Tracking multiple particles in 3D for holography.

  • DTEx207 GAN Image Generation

    Using a GAN to create cell images from masks.
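
Several of these examples (for instance, detecting quantum dots in a low-SNR image) rely on simulated training data with realistic noise. Building on the sketch in Getting Started, here is a hedged illustration of how noise and per-image randomization might be added to a pipeline; the Poisson noise settings and intensity range are assumptions chosen for illustration.

import numpy as np
import deeptrack as dt

# Scatterer with randomized position and brightness.
particle = dt.PointParticle(
    position=lambda: np.random.uniform(16, 112, 2),
    position_unit="pixel",
    intensity=lambda: np.random.uniform(50, 150),
)

# Simulated fluorescence optics.
optics = dt.Fluorescence(
    NA=0.7,
    wavelength=680e-9,
    resolution=1e-6,
    magnification=10,
    output_region=(0, 0, 128, 128),
)

# Chain Poisson noise after the optics to lower the signal-to-noise ratio.
noisy_pipeline = optics(particle) >> dt.Poisson(snr=lambda: np.random.uniform(3, 8))

# Each update() resamples every randomized property, so repeated
# resolves produce a varied simulated dataset.
images = []
for _ in range(8):
    noisy_pipeline.update()
    images.append(np.array(noisy_pipeline.resolve()))

The example notebooks add task-specific labels, models, and training loops on top of pipelines like this one.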

Specific examples for label-free particle tracking using LodeSTAR:

  • DTEx231A LodeSTAR Autotracker Template

  • DTEx231B LodeSTAR Detecting Particles of Various Shapes

  • DTEx231C LodeSTAR Measuring the Mass of Particles in Holography

  • DTEx231D LodeSTAR Detecting the Cells in the BF-C2DT-HSC Dataset

  • DTEx231E LodeSTAR Detecting the Cells in the Fluo-C2DT-Huh7 Dataset

  • DTEx231F LodeSTAR Detecting the Cells in the PhC-C2DT-PSC Dataset

  • DTEx231G LodeSTAR Detecting Plankton

  • DTEx231H LodeSTAR Detecting in 3D Holography

  • DTEx231I LodeSTAR Measuring the Mass of Simulated Particles

  • DTEx231J LodeSTAR Measuring the Mass of Cells

Specific examples for graph-neural-network-based particle linking and trace characterization using MAGIK:

  • DTEx241A MAGIK Tracing Migrating Cells

  • DTEx241B MAGIK to Track HeLa Cells

Advanced Tutorials

Developer Tutorials

Here you will find a series of notebooks tailored to DeepTrack2's developers:

Documentation

The detailed documentation of DeepTrack2 is available at the following link: https://deeptrackai.github.io/DeepTrack2

Cite us!

If you use DeepTrack 2.1 in your project, please cite us:

https://pubs.aip.org/aip/apr/article/8/1/011310/238663

"Quantitative Digital Microscopy with Deep Learning."
Benjamin Midtvedt, Saga Helgadottir, Aykut Argun, Jesús Pineda, Daniel Midtvedt & Giovanni Volpe.
Applied Physics Reviews, volume 8, article number 011310 (2021).

See also:

https://nostarch.com/deep-learning-crash-course

Deep Learning Crash Course
Benjamin Midtvedt, Jesús Pineda, Henrik Klein Moberg, Harshith Bachimanchi, Joana B. Pereira, Carlo Manzo & Giovanni Volpe.
2025, No Starch Press (San Francisco, CA)
ISBN-13: 9781718503922

https://www.nature.com/articles/s41467-022-35004-y

"Single-shot self-supervised object detection in microscopy." 
Benjamin Midtvedt, Jesús Pineda, Fredrik Skärberg, Erik Olsén, Harshith Bachimanchi, Emelie Wesén, Elin K. Esbjörner, Erik Selander, Fredrik Höök, Daniel Midtvedt & Giovanni Volpe
Nature Communications, volume 13, article number 7492 (2022).

https://www.nature.com/articles/s42256-022-00595-0

"Geometric deep learning reveals the spatiotemporal fingerprint of microscopic motion."
Jesús Pineda, Benjamin Midtvedt, Harshith Bachimanchi, Sergio Noé, Daniel Midtvedt, Giovanni Volpe & Carlo Manzo.
Nature Machine Intelligence, volume 5, pages 71–82 (2023).

https://doi.org/10.1364/OPTICA.6.000506

"Digital video microscopy enhanced by deep learning."
Saga Helgadottir, Aykut Argun & Giovanni Volpe.
Optica, volume 6, pages 506–513 (2019).

Funding

This work was supported by the ERC Starting Grant ComplexSwimmers (Grant No. 677511), the ERC Starting Grant MAPEI (101001267), and the Knut and Alice Wallenberg Foundation.