
Commit

Add build doc
hcho3 committed Feb 7, 2019
1 parent 6464507 commit 9e08a2e
Showing 5 changed files with 127 additions and 8 deletions.
15 changes: 8 additions & 7 deletions CMakeLists.txt
@@ -5,6 +5,11 @@ include(cmake/Utils.cmake)
set_default_configuration_release()
msvc_use_static_runtime()

# Options
option(USE_CUDA "Build with CUDA" OFF)
option(USE_CUDNN "Build with CUDNN" OFF)
option(USE_TENSORRT "Build with Tensor RT" OFF)

# Use RPATH on Mac OS X as flexible mechanism for locating dependencies
# See https://blog.kitware.com/upcoming-in-cmake-2-8-12-osx-rpath-support/
set(CMAKE_MACOSX_RPATH TRUE)
@@ -56,11 +61,7 @@ FILE(GLOB_RECURSE DLR_INC
"${TREELITE_SRC}/include/*.h"
)

set(USE_CUDA OFF)
set(USE_CUDNN OFF)
set(USE_TENSORRT OFF)

if(USE_CUDA STREQUAL "ON")
if(USE_CUDA)
message("USING CUDA")
set(USE_CUDA "/usr/local/cuda-9.0")
set(CUDA_TOOLKIT_ROOT_DIR ${USE_CUDA})
@@ -84,7 +85,7 @@ if(USE_CUDA STREQUAL "ON")
file(GLOB RUNTIME_CUDA_SRCS ${TVM_SRC}/src/runtime/cuda/*.cc)
list(APPEND DLR_SRC ${RUNTIME_CUDA_SRCS})
endif()
if(USE_CUDNN STREQUAL "ON")
if(USE_CUDNN)
message("USING CUDNN")
set(USE_CUDNN ${USE_CUDA})
set(CUDNN_TOOLKIT_ROOT_DIR ${USE_CUDNN})
@@ -100,7 +101,7 @@ if(USE_CUDNN STREQUAL "ON")
file(GLOB CONTRIB_CUDNN_SRCS ${TVM_SRC}/src/contrib/cudnn/*.cc)
list(APPEND RUNTIME_SRCS ${CONTRIB_CUDNN_SRCS})
endif()
if(USE_TENSORRT STREQUAL "ON")
if(USE_TENSORRT)
message("USING TENSORRT")
set(USE_TENSORRT "/home/ubuntu/TensorRT-4.0.1.6")
set(TENSORRT_ROOT_DIR ${USE_TENSORRT})
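
With the hard-coded set(USE_CUDA OFF) lines replaced by option() declarations, the three GPU switches become ordinary CMake cache options. A minimal sketch of how they can now be inspected and toggled from a build directory instead of by editing CMakeLists.txt (illustration only; the defaults are the OFF values declared above, and the CUDA/TensorRT paths set inside the if() blocks still apply):

    cmake -L .. | grep USE_                                    # lists USE_CUDA, USE_CUDNN, USE_TENSORRT with current values
    cmake .. -DUSE_CUDA=ON -DUSE_CUDNN=ON -DUSE_TENSORRT=ON    # enable GPU support at configure time
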
1 change: 1 addition & 0 deletions README.md
@@ -3,6 +3,7 @@
DLR is a compact, common runtime for deep learning models and decision tree models compiled by [AWS SageMaker Neo](https://aws.amazon.com/sagemaker/neo/), [TVM](https://tvm.ai/), or [Treelite](https://treelite.readthedocs.io/en/latest/install.html). DLR uses the TVM runtime, Treelite runtime, NVIDIA TensorRT™, and can include other hardware-specific runtimes. DLR provides unified Python/C++ APIs for loading and running compiled models on various devices. DLR currently supports platforms from Intel, NVIDIA, and ARM, with support for Xilinx, Cadence, and Qualcomm coming soon.

## Documentation
For instructions on installing DLR, please refer to [Installing DLR](https://neo-ai-dlr.readthedocs.io/en/latest/install.html)

For instructions on using DLR, please refer to [Amazon SageMaker Neo – Train Your Machine Learning Models Once, Run Them Anywhere](https://aws.amazon.com/blogs/aws/amazon-sagemaker-neo-train-your-machine-learning-models-once-run-them-anywhere/)

2 changes: 1 addition & 1 deletion doc/Doxyfile
@@ -7,7 +7,7 @@ CASE_SENSE_NAMES = NO
ENABLE_PREPROCESSING = YES
MACRO_EXPANSION = YES
EXPAND_ONLY_PREDEF = YES
PREDEFINED =
PREDEFINED = DLR_DLL=
INPUT = ../include/ ../src/
EXAMPLE_PATH = ../
RECURSIVE = YES
1 change: 1 addition & 0 deletions doc/index.rst
@@ -10,6 +10,7 @@ Contents
   :maxdepth: 2
   :titlesonly:

   install
   python-api
   c-api
   Internal docs <http://neo-ai-dlr.readthedocs.io/en/latest/dev/>
116 changes: 116 additions & 0 deletions doc/install.rst
@@ -0,0 +1,116 @@
##############
Installing DLR
##############

.. contents:: Contents
   :local:
   :backlinks: none

************************
Building DLR from source
************************

Building DLR consists of two steps:

1. Build the shared library from C++ code (``libdlr.so`` for Linux, ``libdlr.dylib`` for Mac OS X, and ``dlr.dll`` for Windows).
2. Install the Python package ``dlr``.

.. note:: Use of Git submodules

   DLR uses Git submodules to manage dependencies, so remember to specify the ``--recursive`` option when you clone the repository:

   .. code-block:: bash

      git clone --recursive https://github.com/neo-ai/neo-ai-dlr
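
If you have already cloned the repository without ``--recursive``, the submodules can still be fetched afterwards with a standard Git command (run from the ``neo-ai-dlr`` directory created by the clone above):

.. code-block:: bash

   cd neo-ai-dlr
   git submodule update --init --recursive
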
Building on Linux
=================

Ensure that all necessary software packages are installed: GCC (or Clang), CMake, and Python. For example, on Ubuntu you can run:

.. code-block:: bash

   sudo apt-get update
   sudo apt-get install -y python3 python3-pip gcc build-essential cmake

To build, create a subdirectory ``build`` and invoke CMake:

.. code-block:: bash

   mkdir build
   cd build
   cmake ..

Once CMake is done generating a Makefile, run GNU Make to compile:

.. code-block:: bash

   make -j4   # Use 4 cores to compile sources in parallel

By default, DLR will be built with CPU support only. To enable support for NVIDIA GPUs, turn on the CUDA, CUDNN, and TensorRT options when calling CMake:

.. code-block:: bash

   cmake .. -DUSE_CUDA=ON -DUSE_TENSORRT=ON -DUSE_CUDNN=ON
   make -j4

You will need to install the NVIDIA CUDA toolkit and drivers beforehand.
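
To confirm that the CUDA toolkit and driver are present on the build machine, you can run the following quick checks (output will vary by installation):

.. code-block:: bash

   nvcc --version   # CUDA compiler, part of the CUDA toolkit
   nvidia-smi       # driver version and visible GPUs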

Once the compilation is completed, install the Python package by running ``setup.py``:

.. code-block:: bash

   cd python
   python3 setup.py install
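
To verify the installation, check that the package can be imported (a minimal sanity check; it assumes the ``dlr`` package was installed into the Python environment invoked below):

.. code-block:: bash

   python3 -c "import dlr"
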
Building on Mac OS X
====================

Install GCC and CMake from `Homebrew <https://brew.sh/>`_:

.. code-block:: bash

   brew update
   brew install cmake gcc@8

To ensure that the Homebrew GCC is used (instead of the default Apple compiler), set the environment variables ``CC`` and ``CXX`` when invoking CMake:

.. code-block:: bash

   mkdir build
   cd build
   CC=gcc-8 CXX=g++-8 cmake ..
   make -j4

NVIDIA GPUs are not supported on the Mac OS X target.

Once the compilation is completed, install the Python package by running ``setup.py``:

.. code-block:: bash

   cd python
   python3 setup.py install

Building on Windows
===================

DLR requires `Visual Studio 2017 <https://visualstudio.microsoft.com/downloads/>`_ as well as `CMake <https://cmake.org/>`_.

In the DLR directory, first run CMake to generate a Visual Studio project:

.. code-block:: cmd

   mkdir build
   cd build
   cmake .. -G"Visual Studio 15 2017 Win64"

If the CMake run was successful, you should be able to find the solution file ``dlr.sln`` in the ``build`` directory. Open it with Visual Studio. To build, choose **Build Solution** from the **Build** menu.
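
Alternatively, the generated solution can be built from the command line without opening the IDE (standard CMake usage; run from the ``build`` directory):

.. code-block:: cmd

   cmake --build . --config Release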

NVIDIA GPUs are not yet supported on the Windows target.

Once the compilation is completed, install the Python package by running ``setup.py``:

.. code-block:: bash

   cd python
   python3 setup.py install
