1. Click the Run Configuration button (located next to the Run button).
2. Select the relevant package manager for your dependency.
3. Type the name of the package in the text box and click Add.
4. Repeat steps 2 and 3 for each of your algorithm's dependencies.
Your custom run environment will be built the next time you run your algorithm, and the result is cached, so don't be surprised if the build doesn't happen on every run. The output from the build phase is saved with the run's results in the SetupLog file.
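Once the environment has been built, you can confirm from inside your algorithm that a dependency was actually installed. The sketch below is illustrative, not Code Ocean-specific: it uses Python's standard `importlib`, and the package name `numpy` is just a placeholder for whatever dependency you added.

```python
import importlib

def check_dependency(name):
    """Return the installed version of a package, "unknown" if it has
    no __version__ attribute, or None if it is not installed."""
    try:
        mod = importlib.import_module(name)
    except ImportError:
        return None
    return getattr(mod, "__version__", "unknown")

# "numpy" is a hypothetical example; substitute the package you added
# via Run Configuration.
print("numpy:", check_dependency("numpy"))
```

If the printed value is `None`, the package was not installed in the built environment; check the SetupLog file for build errors.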
Code Ocean supports algorithms that use the NVIDIA CUDA platform. The Caffe, Theano, TensorFlow, and Torch frameworks are currently available, and others can be added on request: just open a support ticket or email us at firstname.lastname@example.org. To run your algorithm with GPU support:
1. Create a new algorithm. You'll need to choose one of the CUDA-supporting languages for your algorithm: C++, Python, or Lua.
2. Click the Run Configuration button (right next to the Run button).
3. Select one of the CUDA-based environments from the Base Environment dropdown.
4. That's it! Your algorithm will now automatically execute on one of our GPU machines.
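After completing the steps above, you may want a quick sanity check that the framework can actually see a GPU. Here's a minimal sketch assuming a TensorFlow-based environment (TensorFlow is one of the frameworks named above); it degrades gracefully when TensorFlow is absent, so it also runs outside Code Ocean.

```python
def gpu_device_names():
    """Return the names of GPU devices TensorFlow can see, or an empty
    list if TensorFlow is unavailable or no GPU is present."""
    try:
        import tensorflow as tf
    except ImportError:
        return []
    # list_physical_devices reports the devices visible to TensorFlow.
    return [d.name for d in tf.config.list_physical_devices("GPU")]

if __name__ == "__main__":
    names = gpu_device_names()
    print("GPUs visible:", names if names else "none")
```

An empty result on a GPU machine usually means the wrong base environment was selected in step 3.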
You can restore your source files to the version that generated any of your previous runs by clicking the Restore icon in the results pane. Your source files will be restored to match the selected run; input files and dependencies will not be affected.