
Commit

Added notebook example to load activation maps.
ArashAkbarinia committed Dec 21, 2023
1 parent e317457 commit c1eeb73
Showing 7 changed files with 528 additions and 19 deletions.
18 changes: 18 additions & 0 deletions docs/source/conf.py
Original file line number Diff line number Diff line change
@@ -34,9 +34,27 @@
'sphinx.ext.doctest',
'sphinx.ext.napoleon',
'sphinx.ext.viewcode',
'sphinx.ext.mathjax',
'myst_nb'
]

myst_enable_extensions = [
"amsmath",
# "attrs_inline",
# "colon_fence",
# "deflist",
"dollarmath",
# "fieldlist",
# "html_admonition",
# "html_image",
# "linkify",
# "replacements",
# "smartquotes",
# "strikethrough",
# "substitution",
# "tasklist",
]

templates_path = ['_templates']
exclude_patterns = []

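Put together, the notebook support added to `docs/source/conf.py` in this commit boils down to the following fragment (a sketch of the configuration as it reads after the change, assuming `myst-nb` is installed; only the math-related MyST extensions are enabled, the rest remain commented out in the file):

```python
# Sketch of the Sphinx extension setup in docs/source/conf.py after this commit.
# myst_nb registers MyST-Markdown and Jupyter-notebook sources with Sphinx.
extensions = [
    'sphinx.ext.doctest',
    'sphinx.ext.napoleon',
    'sphinx.ext.viewcode',
    'sphinx.ext.mathjax',
    'myst_nb',
]

# amsmath and dollarmath let the notebooks use $...$ and AMS math environments.
myst_enable_extensions = [
    'amsmath',
    'dollarmath',
]
```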
1 change: 1 addition & 0 deletions docs/source/examples.rst
@@ -5,5 +5,6 @@ In the following notebooks, we show different examples of how to use :code:`oscu

.. toctree::
notebooks/quick_start
notebooks/activation_maps
notebooks/odd_one_out
:maxdepth: 1
492 changes: 492 additions & 0 deletions docs/source/notebooks/activation_maps.ipynb

Large diffs are not rendered by default.

4 changes: 2 additions & 2 deletions docs/source/notebooks/odd_one_out.ipynb
@@ -53,7 +53,7 @@
"id": "2b21aabc-b485-4f2f-9ed2-3340ef923db1",
"metadata": {},
"source": [
"## Prtrained features\n",
"## Pretrained features\n",
"\n",
"Let's create a linear classifier on top of the extracted features from a pretrained network to \n",
"perform a **4AFC odd-one-out (OOO)** task (i.e., which image out of four options is the \"odd\" one). \n",
@@ -69,7 +69,7 @@
"metadata": {},
"outputs": [],
"source": [
"architecture = 'vit_b_32' # networks' architecture\n",
"architecture = 'vit_b_32' # network's architecture\n",
"weights = 'vit_b_32' # the pretrained weights\n",
"img_size = 224 # network's input size\n",
"layer = 'block7' # the readout layer\n",
4 changes: 2 additions & 2 deletions docs/source/notebooks/quick_start.ipynb
@@ -57,7 +57,7 @@
"id": "2d0848b0-b0b8-4def-8bac-684e060b8623",
"metadata": {},
"source": [
"## Prtrained features\n",
"## Pretrained features\n",
"\n",
"Let's create a linear classifier on top of the extracted features from a pretrained network to \n",
"perform a binary classification task (i.e., 2AFC – two-alternative forced-choice). This is easily \n",
@@ -71,7 +71,7 @@
"metadata": {},
"outputs": [],
"source": [
"architecture = 'resnet50' # networks' architecture\n",
"architecture = 'resnet50' # network's architecture\n",
"weights = 'resnet50' # the pretrained weights\n",
"img_size = 224 # network's input size\n",
"layer = 'block0' # the readout layer\n",
27 changes: 13 additions & 14 deletions docs/source/notebooks/usage.ipynb
@@ -7,12 +7,12 @@
"source": [
"# Usage\n",
"\n",
"This notebook shows how to use the `osculari` package.\n",
"This notebook demonstrates how to use the `osculari` package.\n",
"\n",
"The `osculari` package consists of three main `modules`:\n",
"* `models`: to readout pretrained networks and add linear layers on top of them.\n",
"* `datasets`: to create datasets and dataloaders to train and test linear probes.\n",
"* `paradigms`: to implement psychophysical paradigms to experiment with deep networks."
"The `osculari` package is organized into three main `modules`:\n",
"* `models`: Used for reading pretrained networks and adding linear layers on top of them.\n",
"* `datasets`: Used to create datasets and dataloaders for training and testing linear probes.\n",
"* `paradigms`: Used to implement psychophysical paradigms for experimenting with deep networks."
]
},
{
@@ -77,7 +77,7 @@
},
{
"cell_type": "code",
"execution_count": 73,
"execution_count": 2,
"id": "e74f3e20-bb57-4511-baf7-d18da5cb38ed",
"metadata": {},
"outputs": [
@@ -168,8 +168,7 @@
" 'deeplabv3_resnet101',\n",
" 'deeplabv3_resnet50',\n",
" 'fcn_resnet101',\n",
" 'fcn_resnet50',\n",
" 'lraspp_mobilenet_v3_large'],\n",
" 'fcn_resnet50'],\n",
" 'taskonomy': ['taskonomy_autoencoding',\n",
" 'taskonomy_class_object',\n",
" 'taskonomy_class_scene',\n",
@@ -206,7 +205,7 @@
" 'clip_ViT-L/14@336px']}"
]
},
"execution_count": 73,
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
@@ -242,7 +241,7 @@
" - Downloadable URL of the pretrained weights.\n",
" - A string corresponding to the available weight, for instance, [PyTorch resnet50](https://pytorch.org/vision/stable/models/generated/torchvision.models.resnet50.html) supports one\n",
"of the following strings: \\[\"*DEFAULT*\", \"*IMAGENET1K_V1*\", \"*IMAGENET1K_V2*\"\\].\n",
" - The same name as `architecture` which loads the network's default weights.\n",
" - The same name as `architecture`, which loads the network's default weights.\n",
"* `layers` determines the read-out (cut-off) layer(s). Which layers are available for each network\n",
"can be obtained by calling the `models.available_layers()` function.\n",
"\n",
@@ -272,7 +271,7 @@
}
],
"source": [
"architecture = 'resnet50' # networks' architecture\n",
"architecture = 'resnet50' # network's architecture\n",
"weights = 'resnet50' # the pretrained weights\n",
"layer = 'block0' # the readout layer\n",
"readout_kwargs = { # parameters for extracting features from the pretrained network\n",
@@ -431,7 +430,7 @@
}
],
"source": [
"architecture = 'resnet50' # networks' architecture\n",
"architecture = 'resnet50' # network's architecture\n",
"weights = 'resnet50' # the pretrained weights\n",
"img_size = 224 # network's input size\n",
"layer = 'block0' # the readout layer\n",
@@ -540,7 +539,7 @@
}
],
"source": [
"architecture = 'resnet50' # networks' architecture\n",
"architecture = 'resnet50' # network's architecture\n",
"weights = 'resnet50' # the pretrained weights\n",
"img_size = 224 # network's input size\n",
"layer = 'block0' # the readout layer\n",
@@ -629,7 +628,7 @@
}
],
"source": [
"architecture = 'resnet50' # networks' architecture\n",
"architecture = 'resnet50' # network's architecture\n",
"weights = 'resnet50' # the pretrained weights\n",
"img_size = 224 # network's input size\n",
"layer = 'block0' # the readout layer\n",
1 change: 0 additions & 1 deletion tests/models/readout_test.py
@@ -125,7 +125,6 @@ def test_odd_one_out_net_loss_function():
def test_preprocess_transform():
# Test the preprocess_transform of BackboneNet
net = readout.BackboneNet(architecture='taskonomy_autoencoding', weights=None)
mean, std = net.normalise_mean_std

# Create a dummy input signal (replace this with your actual input)
input_signal = np.random.uniform(size=(224, 224, 3))
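The dummy-input pattern used in this test can be sketched in plain NumPy. The mean/std values below are torchvision's ImageNet statistics, used here purely as illustrative stand-ins for whatever `net.normalise_mean_std` returns:

```python
import numpy as np

# Illustrative normalisation statistics (torchvision's ImageNet values),
# standing in for the backbone's actual preprocessing parameters.
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])

# A dummy HxWxC input signal in [0, 1], as in the test above.
input_signal = np.random.uniform(size=(224, 224, 3))

# Channel-wise normalisation; broadcasting applies mean/std over the last axis.
normalised = (input_signal - mean) / std
```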
