
Commit

modified: docs/README.md
vsoch committed Oct 24, 2017
1 parent 56abfee commit 94d4ac3
Showing 5 changed files with 9 additions and 39 deletions.
3 changes: 3 additions & 0 deletions docs/README.md
@@ -11,6 +11,9 @@ Reasonable updates would be:

- to add a DICOM receiver directly to the application using `pynetdicom3`, so instead of listening for datasets on the filesystem, we can receive them directly.

## Preparation
The base of the image is distributed via [sendit-base](scripts/docker/README.md). This base image carries all of the dependencies, so we can easily bring the application image up and down.
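As a rough sketch (assuming the compose file at the repository root, and reusing the image name that appears elsewhere in these docs), bringing the application up and down once the base is in place looks like:

```
# build the application image on top of the prepared base, then start the services
docker build -t vanessa/sendit .
docker-compose up -d

# stop the services without removing the built image
docker-compose stop
```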

## Deployment

- [Setup](setup.md): Basic setup (download and install) of a new application for a server.
29 changes: 2 additions & 27 deletions docs/config.md
@@ -72,45 +72,20 @@ Note that the fields for `ENTITY_ID` and `ITEM_ID` are set to the default of [de
## Storage
The next set of variables is specific to [storage](storage.md), which is the final step in the pipeline.

```
# We can turn on/off send to Orthanc. If turned off, the images would just be processed
SEND_TO_ORTHANC=True
# The ipaddress of the Orthanc server to send the finished dicoms (cloud PACS)
ORTHANC_IPADDRESS="127.0.0.1"
# The port of the same machine (by default they map it to 4747)
ORTHANC_PORT=4747
```

Since Orthanc is a server itself, if we are ever in need of a way to quickly deploy and bring down these instances as needed, we could do that too, and the application would retrieve the IP address programmatically.
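For illustration only, pushing a finished dicom to such an Orthanc endpoint amounts to a standard DICOM store; the sketch below uses dcmtk's `storescu` with a hypothetical file name, while the application itself performs the send using the settings above:

```
# send a single finished dicom to the configured Orthanc (cloud PACS)
storescu 127.0.0.1 4747 finished-image.dcm
```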

I would also like to eventually add the following, meaning that we send datasets to Google Cloud Storage and Datastore as well, ideally as compressed nifti instead of dicom, and with some subset of fields. These functions are turned off by default.

```
# Should we send to Google at all?
SEND_TO_GOOGLE=False
SEND_TO_GOOGLE=True
# Google Cloud Storage Bucket (must be created)
GOOGLE_CLOUD_STORAGE='radiology'
GOOGLE_STORAGE_COLLECTION=None # define here or in your secrets
GOOGLE_PROJECT_NAME="project-name" # not the id, usually the end of the url in Google Cloud
```
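As a sketch of what the storage upload amounts to (the file name below is hypothetical, and the bucket is the one configured above, which must already exist):

```
# copy a compressed nifti into the configured Google Cloud Storage bucket
gsutil cp sub-001.nii.gz gs://radiology/
```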

Note that the storage collection is set to None, and this should be the id of the study (e.g., the IRB). If this is set to None, it will not upload. Finally, to add a special header to signify a Google Storage project, you should add the name of the intended project to your header:

```
GOOGLE_PROJECT_ID_HEADER="12345"
# Will produce this key/value header
x-goog-project-id: 12345
```

**Note**: we aren't currently using this header, and uploads work fine without it.
Note that the storage collection is set to None, and this should be the id of the study (e.g., the IRB). If this is set to None, it will not upload.

Note that this approach isn't suited for having more than one study; when that is the case, the study will likely be registered with the batch. Importantly, for the above, there must be a `GOOGLE_APPLICATION_CREDENTIALS` filepath exported in the environment, or it should be run on a Google Cloud instance (unlikely).
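For example, exporting the credentials before starting the application (the path below is hypothetical):

```
# point the application at a service account key file
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
```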


## Authentication
If you look in [sendit/settings/auth.py](../sendit/settings/auth.py) you will see something called `lockdown` and that it is turned on:

8 changes: 3 additions & 5 deletions docs/setup.md
@@ -4,11 +4,9 @@ This document will review basic setup of the sendit application. You will need r


## Download
Before you start, you should make sure that you have Docker and docker-compose installed, and a complete script for setting up the dependencies for any instance [is provided](scripts/prepare_instance.sh).
Before you start, you should make sure that you have Docker and docker-compose installed, and a complete script for setting up the dependencies for any instance [is provided](scripts/prepare_instance.sh). The script basically installs docker-compose and docker, and downloads this repository to an install location.
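A minimal sketch of running that preparation script on a fresh instance (the raw URL assumes the repository's default branch):

```
# fetch and run the dependency preparation script
wget https://raw.githubusercontent.com/pydicom/sendit/master/scripts/prepare_instance.sh
bash prepare_instance.sh
```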

Note that we build the image in the above; this is important to take note of (the image isn't served on Docker Hub) because certificates are generated during the build.

You should walk through this carefully to make sure everything completes, and importantly, to install docker you will need to log in and out. You should then clone the repo, and we recommend a location like `/opt`.
You should walk through this carefully to make sure everything completes, and importantly, after installing docker you will need to log out and back in for the group changes to take effect. The last steps in the preparation are to clone the repo, and we recommend a location like `/opt`.

```
cd /opt
@@ -37,4 +37,4 @@ uwsgi:
```


You should next [configure](config.md) your application.
You should next [configure](config.md) your application before building the image.
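Once configured, the build and bring-up that used to live in the preparation script (see the removed lines in `scripts/prepare_instance.sh` below) are run by hand, roughly as follows, assuming the repository was cloned to `/opt`:

```
# after configuration, build the image and start the services
cd /opt/sendit
docker build -t vanessa/sendit .
docker-compose up -d
```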
3 changes: 0 additions & 3 deletions scripts/prepare_instance.sh
@@ -54,7 +54,4 @@ if [ ! -d $INSTALL_ROOT/sendit ]
then
cd $INSTALL_ROOT
git clone https://www.github.com/pydicom/sendit.git
cd sendit
docker build -t vanessa/sendit .
docker-compose up -d
fi
5 changes: 1 addition & 4 deletions sendit/apps/main/tasks/finish.py
@@ -48,11 +48,8 @@
)

from sendit.settings import (
SEND_TO_ORTHANC,
SEND_TO_GOOGLE,
SOM_STUDY,
ORTHANC_IPADDRESS,
ORTHANC_PORT
SOM_STUDY
)

from retrying import retry
