Last updated September 26, 2023

Ray is an open-source distributed computing framework for building and scaling distributed applications. Developed at the UC Berkeley RISELab, it is widely used across industries for machine learning, artificial intelligence, data processing, and more. Ray provides a powerful, flexible platform for managing distributed workloads efficiently, making it easier to harness the full potential of modern computing infrastructure.

Each of Ray’s five native libraries distributes a specific ML task:

  • Data: Scalable, framework-agnostic data loading and transformation across training, tuning, and prediction.

  • Train: Distributed multi-node and multi-core model training with fault tolerance that integrates with popular training libraries.

  • Tune: Scalable hyperparameter tuning to optimize model performance.

  • Serve: Scalable and programmable serving to deploy models for online inference, with optional microbatching to improve performance.

  • RLlib: Scalable distributed reinforcement learning workloads.

To install Ray libraries, we recommend creating a Conda environment and then installing the specific library you need based on the Ray documentation. See Building a Customized Conda Environment for more information.
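The Conda workflow described above might look like the following; the environment name `ray-env` and the Python version are illustrative choices, and the bracketed extras select which Ray libraries to install:

```shell
# Create and activate a dedicated Conda environment (name is an example).
conda create -n ray-env python=3.10 -y
conda activate ray-env

# Install Ray with only the libraries you need, e.g. Data, Train, Tune, Serve.
pip install "ray[data,train,tune,serve]"
```

Installing a subset of extras keeps the environment smaller than `ray[all]`; consult the Ray documentation for the extras matching your workload.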

Additional resources

If you have questions about or need help with Ray, please submit a help ticket and we will assist you.