First, install Anaconda (make sure to pick the Python 3 version). If you would like a smaller download, check out Miniconda instead.

Next, we’re going to add some channels that we need for certain software:

conda config --add channels conda-forge # For ONNX/tensorboardX
conda config --add channels pytorch # For PyTorch

If you get an error saying that the “conda” command could not be found, make sure that anaconda is installed and your path is set correctly.
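The usual cause is that the installer's bin directory is not on your PATH. A minimal fix for the current shell session, assuming the default Miniconda install location (adjust the directory name to match your install):

```shell
# Put the conda executable on the PATH for this shell session.
export PATH="$HOME/miniconda3/bin:$PATH"
```

Add the same line to your shell profile (e.g. ~/.bashrc) to make it permanent.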

Clone and enter the ReAgent repo:

git clone --recurse-submodules
cd ReAgent/

If you already cloned the repo without submodules, they can be added by running this command inside the repository:

git submodule update --init --recursive
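To verify that the submodules are actually checked out, you can list their status; an initialized submodule line begins with a commit hash, while a leading "-" marks one that still needs `git submodule update --init`:

```shell
# Lists each submodule with its checked-out commit; a "-" prefix means uninitialized.
git submodule status
```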

Install dependencies:

conda install --file requirements.txt

Set JAVA_HOME to the location of your anaconda install:

export JAVA_HOME="$(dirname $(dirname -- `which conda`))"

echo $JAVA_HOME # Should see something like "/home/jjg/miniconda3"
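A quick sanity check of what that command computes, using a hypothetical conda location (`which conda` would print something like this):

```shell
# Hypothetical path to the conda executable.
conda_bin=/home/jjg/miniconda3/bin/conda
# dirname strips one path component per call: .../bin/conda -> .../bin -> the install root.
dirname -- "$conda_bin"                    # /home/jjg/miniconda3/bin
dirname -- "$(dirname -- "$conda_bin")"    # /home/jjg/miniconda3
```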

Install Spark (the mv command may need to be done as root):

tar -xzf spark-2.3.3-bin-hadoop2.7.tgz
sudo mv spark-2.3.3-bin-hadoop2.7 /usr/local/spark
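If you don't already have the tarball, it can be fetched from the Apache release archive (URL assumed to follow the standard archive layout for Spark 2.3.3):

```shell
# Download the Spark 2.3.3 release tarball before extracting it.
curl -LO https://archive.apache.org/dist/spark/spark-2.3.3/spark-2.3.3-bin-hadoop2.7.tgz
```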

Add the spark bin directory to your path so your terminal can find spark-submit:

export PATH=$PATH:/usr/local/spark/bin
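The export only affects the current shell; to make it persistent, append it to your shell profile as well (bash assumed; adapt for zsh etc.), then confirm the command resolves:

```shell
# Persist the PATH change for future shells, then check that spark-submit is found.
echo 'export PATH=$PATH:/usr/local/spark/bin' >> ~/.bashrc
spark-submit --version
```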

Install OpenAI Gym if you plan on following our tutorial:

pip install "gym[classic_control,box2d,atari]"

Download libtorch from the PyTorch website and extract it to $HOME/libtorch

As of PyTorch 1.3, libtorch is broken on macOS. To fix (macOS only):

cp ~/miniconda3/lib/libiomp5.dylib $HOME/libtorch/lib/

And now, you are ready to install ReAgent itself. To install the serving platform:

mkdir serving/build
cd serving/build
cmake -DCMAKE_PREFIX_PATH=$HOME/libtorch ..
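cmake only generates the build files; the compile itself is a separate step. Assuming the default Makefile generator, from the same build directory:

```shell
# Build in parallel across available cores (nproc is Linux; use `sysctl -n hw.ncpu` on macOS).
make -j "$(nproc)"
```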

Next we must package the models. We use “pip install -e” on the root directory of the repository to create an editable package. This means that you can make changes to ReAgent and they will be reflected in the package immediately.

pip install -e .

At this point, you should be able to run all unit tests:

python setup.py test

We have included a Dockerfile for the CPU-only build and CUDA build under the docker directory. The CUDA build will need nvidia-docker to run.

To build, clone the repository and cd into it:

git clone
cd ReAgent/

On macOS you will need to increase the default memory allocation, as the default of 2G is not enough. You can do this from the Docker whale icon in the menu bar (Preferences -> Resources). We recommend using at least 8G of memory.

On macOS, you can then build the image:

docker build -f docker/cpu.Dockerfile -t horizon:dev .

On Linux you can build the image with specific memory allocations from the command line:

docker build -f docker/cpu.Dockerfile -t horizon:dev --memory=8g --memory-swap=8g .

To build with CUDA support, use the corresponding dockerfile:

docker build -f docker/cuda.Dockerfile -t horizon:dev .

Once the Docker image is built you can start an interactive shell in the container and run the unit tests. To edit files locally and have the changes be available inside the container, mount the local ReAgent repo as a volume using the -v flag. We also add -p to map TensorBoard's port (6006 by default) so we can view visualizations locally.

docker run -v $PWD:/home/ReAgent -p 6006:6006 -it horizon:dev

To run with GPU, include --runtime=nvidia after installing nvidia-docker.

docker run --runtime=nvidia -v $PWD:/home/ReAgent -p 6006:6006 -it horizon:dev

If you have SELinux (Fedora, Red Hat, etc.) you will have to start Docker with the following command (note the :Z at the end of the path):

docker run -v $PWD:/home/ReAgent:Z -p 6006:6006 -it horizon:dev

To run with GPU, include --runtime=nvidia after installing nvidia-docker.

docker run --runtime=nvidia -v $PWD:/home/ReAgent:Z -p 6006:6006 -it horizon:dev

Depending on where your local ReAgent copy is, you may need to whitelist your shared path via Docker -> Preferences… -> File Sharing.

Once inside the container, run the setup file:

cd ReAgent

Now you can run all the tests:

python setup.py test

or try running one specific test:

python setup.py test -s ml.rl.test.constant_reward.test_constant_reward.TestConstantReward.test_trainer_maxq