Website | Source | Tutorial

Ray is a unified framework for scaling AI and Python applications. It consists of a core distributed runtime and a set of AI libraries that simplify ML compute.

Learn more about the Ray AI Libraries, Ray Core and its key abstractions, and Monitoring and Debugging in the Ray documentation.

Ray runs on any machine, cluster, cloud provider, and Kubernetes, and features a growing ecosystem of community integrations.

Install Ray with: pip install ray. For nightly wheels, see the Installation page.

Why Ray?

Today's ML workloads are increasingly compute-intensive. As convenient as they are, single-node development environments such as your laptop cannot scale to meet these demands.

Ray is a unified way to scale Python and AI applications from a laptop to a cluster.

With Ray, you can seamlessly scale the same code from a laptop to a cluster. Ray is designed to be general-purpose, so it can run any kind of workload performantly. If your application is written in Python, you can scale it with Ray; no other infrastructure is required.
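To make this concrete, here is a minimal sketch of Ray Core's key abstractions (tasks and actors), assuming Ray is installed via pip install ray; the names square and Counter are illustrative only, not taken from this page.

```python
# Minimal sketch, assuming Ray is installed: pip install ray
# The names `square` and `Counter` are illustrative examples.
import ray

# Starts a local Ray runtime on a laptop; to join an existing cluster,
# pass its address instead, e.g. ray.init(address="auto").
ray.init()

# Tasks: an ordinary Python function becomes a distributed task.
@ray.remote
def square(x):
    return x * x

# Each .remote() call returns an object reference immediately;
# ray.get() blocks and fetches the results.
futures = [square.remote(i) for i in range(8)]
print(ray.get(futures))  # [0, 1, 4, 9, 16, 25, 36, 49]

# Actors: a class becomes a stateful worker process.
@ray.remote
class Counter:
    def __init__(self):
        self.count = 0

    def increment(self):
        self.count += 1
        return self.count

counter = Counter.remote()
print(ray.get(counter.increment.remote()))  # 1
```

Run unchanged, ray.init() starts a local runtime on the laptop; pointed at a cluster address, the identical code fans out across the cluster's nodes.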

More Information

Older documents:

Libraries

New Libraries

This section lists libraries that are well-made and useful but have not yet been battle-tested by a large user base.

Models and Projects

Ray + LLM

Reinforcement Learning

Ray Data (Data Processing)

Ray Train (Distributed Training)

Ray Tune (Hyperparameter Optimization)

Ray Serve (Model Serving)

Ray + JAX / TPU

Ray + Database

Ray + X (integration)

Ray-Project

Distributed Computing

Ray AIR

Cloud Deployment

Misc

Videos

Anyscale Academy & Official Tutorials

Conference Talks

RLlib

Papers

This section contains papers focused on Ray (e.g., whitepapers for Ray-based libraries, research on Ray, etc.). Papers with implementations in Ray are listed in the Models and Projects section.

Foundational Papers

Tutorials and Blog Posts

2024-2025

Earlier Resources

Books

Courses

Cheatsheets


Tags: ai, distribution

Last modified 20 January 2026