Inspired by the MTNeuro Benchmark Dataset, the Kasthuri Challenge introduces new annotations of synapses and membranes in neural tissue from a mouse cortex. BossDB, an open-source volumetric database for 3D and 4D neuroscience data, and Neuroglancer were used to derive annotations from the dataset.
In the past decade, there have been major pursuits in understanding large-scale neuroanatomical structures in the brain. These ventures have produced a surplus of brain data that can potentially reveal new phenomena of brain organization. Many machine and deep learning tools are currently being pursued; however, there is still a need for new standards for understanding these large-scale brain datasets. To address this challenge, we introduce a new dataset, annotations, and tasks that provide a diverse approach to reading out information about brain structure and architecture. We adapted a previous multitask neuroimaging benchmark (MTNeuro), built on a volumetric, micrometer-resolution X-ray microtomography image spanning a large thalamocortical section of a mouse brain, as a baseline for our challenge. Our new standardized challenge (the Kasthuri Challenge) aims to generate annotations of a saturated reconstruction of a sub-volume of mouse neocortex imaged with a scanning electron microscope. Specifically, annotations of synapses and membranes are the regions of interest, as they provide the best insight into how machine and deep learning models are able to pinpoint unique connectivity at the microstructure level. Datasets, code, and pre-trained baseline models are provided at: TBD
The dataset contains high-resolution images from the mouse cortex acquired at a spatial resolution of 3x3x30 nanometers per voxel. The full dataset totals approximately 660 GB of images.
This volumetric dataset provides detailed reconstructions of a sub-volume of mouse neocortex, encompassing all cellular objects like axons, dendrites, and glia, as well as numerous sub-cellular components. Notable among these are synapses, synaptic vesicles, spines, spine apparati, postsynaptic densities, and mitochondria.
By leveraging this dataset, the research team gained significant insight into the structural intricacies of neural tissue at nanometer resolution. A key finding was the refutation of Peters' rule: by tracing the pathways of all excitatory axons and examining their juxtapositions with every dendritic spine, the authors showed that simple physical proximity does not suffice to predict synaptic connectivity.
The dataset and its associated labels are hosted publicly on BossDB. To access the data, you can utilize the Python API library, Intern. For anonymous read-only access, use the username "public-access" and password "public".
More details can be found in the cited paper below.
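As a sketch of programmatic access via intern, the snippet below builds a BossDB URI and shows how a small cutout could be downloaded. The collection/experiment/channel names used here are placeholders for illustration; the actual paths are listed on the BossDB project page.

```python
def bossdb_uri(collection, experiment, channel):
    """Build a bossdb:// URI for intern.array (names here are placeholders)."""
    return f"bossdb://{collection}/{experiment}/{channel}"


def fetch_cutout():
    """Download a small (z, y, x) cutout as a numpy array.

    Requires `pip3 install intern` and network access. Anonymous read-only
    access uses username "public-access" and password "public"; intern can
    also read credentials from ~/.intern/intern.cfg.
    """
    from intern import array

    em = array(bossdb_uri("kasthuri", "kasthuri11", "image"))
    # intern arrays are indexed in (z, y, x) order.
    return em[1000:1016, 4096:4608, 4096:4608]


print(bossdb_uri("kasthuri", "kasthuri11", "image"))
```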
Kasthuri Challenge aims to generate annotations of a saturated reconstruction of a sub-volume of mouse neocortex imaged with a scanning electron microscope. Specifically, annotations of synapses and membranes are the regions of interest as they provide the best results and insights of how machine and deep learning are able pinpoint the unique connectivity at the microstructure level.
To get started, clone this repository and change into its directory.
Run the following command to create a virtual environment named kasthuri_env:
python3 -m venv kasthuri_env
Activate the virtual environment:
source kasthuri_env/bin/activate
Now, navigate to the directory where you have cloned the Kasthuri repository and run:
pip3 install -e ./
pip3 install -r requirements.txt
The code has been tested with:
- Python >= 3.8
- PIP == 22.1.2
- torch == 1.11.0
- torchvision == 0.12.0
- numpy == 1.19.3
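A quick stdlib-only check that your interpreter meets the minimum Python version listed above:

```python
import sys

# The baselines were tested with Python >= 3.8.
assert sys.version_info >= (3, 8), "Python 3.8 or newer is required"
print("Python version OK:", sys.version.split()[0])
```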
For users with a compatible NVIDIA GPU, you can leverage accelerated training and inference using PyTorch on CUDA. Follow these steps for a seamless GPU setup:
- Install NVIDIA CUDA Toolkit 11.6: Before proceeding with the PyTorch installation, ensure the NVIDIA CUDA Toolkit 11.6 is set up on your system. Follow the official guide to install the correct version.
- Install NVIDIA cuDNN compatible with CUDA 11.6: After setting up CUDA, install cuDNN, NVIDIA's GPU-accelerated library for deep neural networks. Installation steps are in the official NVIDIA documentation.
- Open setup.py.
- Look for the install_requires section.
- Comment out or remove the existing torch line.
- Add the specific version of torch for GPU support: torch==1.11.0+cu116
- Navigate to the directory where you have cloned the Kasthuri repository.
- Install the Kasthuri package: pip3 install -e ./
- Install the dependencies in requirements.txt: pip3 install -r requirements.txt
After installation, verify that PyTorch recognizes your GPU:
import torch
print(torch.cuda.is_available())
- kasthuri - main code folder
  - bossdbdataset.py - PyTorch dataset
  - networkconfig - JSON configurations for individual network runs
  - taskconfig - JSON configurations for membrane and synapse tasks
- notebooks - visualization and download notebooks
- scripts - main execution scripts for each task
Code for executing training and evaluation of baseline networks is provided for each task in the scripts folder. These can all be run as scripts/script_name from the main repository folder, and can be reconfigured for different networks using the configuration files in networkconfig. This is the easiest way to build on the example code for network development. A PyTorch dataset is provided in bossdbdataset.py and used in our example scripts.
Instructions for adapting our test scripts to a new model are found here.
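As a minimal sketch of how a custom model can plug into the PyTorch training pipeline, the stand-in dataset below mimics the shape of the repository's BossDB-backed dataset (the class here is hypothetical; see bossdbdataset.py for the actual interface):

```python
import torch
from torch.utils.data import Dataset, DataLoader


class RandomSliceDataset(Dataset):
    """Hypothetical stand-in for the repository's BossDB-backed dataset."""

    def __init__(self, n=8, size=64):
        self.imgs = torch.rand(n, 1, size, size)                       # grayscale EM slices
        self.masks = torch.randint(0, 2, (n, 1, size, size)).float()   # binary labels

    def __len__(self):
        return len(self.imgs)

    def __getitem__(self, idx):
        return self.imgs[idx], self.masks[idx]


loader = DataLoader(RandomSliceDataset(), batch_size=4, shuffle=True)
imgs, masks = next(iter(loader))
print(imgs.shape, masks.shape)  # both torch.Size([4, 1, 64, 64])
```

A new model only needs to accept batches of this shape; the rest of the training loop can stay unchanged.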
To get started running examples, files in the scripts directory can be run as follows:
python3 scripts/task_membrane.py
or
python3 scripts/task_synapse.py
Within the networkconfig directory, you can find the following configuration files:
- UNet_2D.json - baseline 2D U-Net.
- UNet_2D_attention.json - adds attention mechanisms to focus on specific regions of interest.
- UNet_2D_depth.json - increased model depth for capturing complex patterns in the data.
- UNet_2D_residual.json - residual connections to mitigate the vanishing gradient problem and enhance feature propagation.
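For illustration only, a network configuration file might look like the following; the actual field names and values are defined by the JSON files in the repository and may differ:

```json
{
  "model": "UNet_2D_attention",
  "epochs": 50,
  "batch_size": 8,
  "learning_rate": 0.001
}
```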
You can select any of these configurations with the --network argument when running the scripts, for example:
python3 scripts/task_membrane.py --network UNet_2D_attention.json
python3 scripts/task_synapse.py --network UNet_2D_attention.json
These commands load default configuration files and public authentication credentials. The training script will output trained network weights as a .pt file and produce output figures.
Access the notebooks for each task in the notebooks folder and run them cell by cell. By default, the code saves the cutouts as NumPy arrays.
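Saved cutouts can be reloaded with NumPy. Here is a minimal sketch; the array shape and filename are illustrative:

```python
import numpy as np

# Illustrative (z, y, x) cutout; the notebooks save downloaded volumes similarly.
cutout = np.zeros((16, 512, 512), dtype=np.uint8)
np.save("cutout.npy", cutout)

restored = np.load("cutout.npy")
print(restored.shape, restored.dtype)  # (16, 512, 512) uint8
```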
Pretrained notebooks run inference using our pretrained model for each model variation. Install the AWS CLI to download the pretrained model weights.
If you find this project useful in your research, please cite the following paper!
- Kasthuri N, Hayworth KJ, Berger DR, Schalek RL, Conchello JA, Knowles-Barley S, Lee D, Vázquez-Reina A, Kaynig V, Jones TR, Roberts M, Morgan JL, Tapia JC, Seung HS, Roncal WG, Vogelstein JT, Burns R, Sussman DL, Priebe CE, Pfister H, Lichtman JW. Saturated Reconstruction of a Volume of Neocortex. Cell. 2015 Jul 30;162(3):648-61. doi: 10.1016/j.cell.2015.06.054. PMID: 26232230.
Thank you to the Benchmark Team - Travis Latchman, Tanvir Grewal, Ashley Pattammady, Kim Barrios, Erik Johnson, and the MTNeuro team!
Use or redistribution of the Boss system in source and/or binary forms, with or without modification, are permitted provided that the following conditions are met:
- Redistributions of source code or binary forms must adhere to the terms and conditions of any applicable software licenses.
- End-user documentation or notices, whether included as part of a redistribution or disseminated as part of a legal or scientific disclosure (e.g. publication) or advertisement, must include the following acknowledgement: The Boss software system was designed and developed by the Johns Hopkins University Applied Physics Laboratory (JHU/APL).
- The names "The Boss", "JHU/APL", "Johns Hopkins University", "Applied Physics Laboratory", "MICrONS", or "IARPA" must not be used to endorse or promote products derived from this software without prior written permission. For written permission, please contact BossAdmin@jhuapl.edu.
- This source code and library is distributed in the hope that it will be useful, but is provided without any warranty of any kind.