(overview-overview)=
Ray is an open-source unified framework for scaling AI and Python applications such as machine learning. It provides the compute layer for parallel processing so that you don't need to be a distributed systems expert. Ray minimizes the complexity of running distributed workloads, from individual tasks to end-to-end machine learning pipelines.
For data scientists and machine learning practitioners, Ray lets you scale jobs without needing infrastructure expertise: you can parallelize workloads across nodes and GPUs with minimal code changes.

For ML platform builders and ML engineers, Ray provides compute abstractions and a unified API for building a scalable, robust ML platform and for onboarding tooling from the broader ML ecosystem.

For distributed systems engineers, Ray automatically handles key processes such as orchestration, scheduling, fault tolerance, and autoscaling.
These are some common ML workloads for which individuals, organizations, and companies use Ray to build their AI applications:
Stack of Ray libraries: a unified toolkit for ML workloads.
Ray's unified compute framework consists of three layers:
.. grid:: 1 2 3 3
    :gutter: 1
    :class-container: container pb-3

    .. grid-item-card::

        **Scale machine learning workloads**
        ^^^
        Build ML applications with a toolkit of libraries for distributed
        :doc:`data processing <../data/data>`,
        :doc:`model training <../train/train>`,
        :doc:`tuning <../tune/index>`,
        :doc:`reinforcement learning <../rllib/index>`,
        :doc:`model serving <../serve/index>`,
        and :doc:`more <../ray-more-libs/index>`.
        +++
        .. button-ref:: libraries-quickstart
            :color: primary
            :outline:
            :expand:

            Ray AI Libraries

    .. grid-item-card::

        **Build distributed applications**
        ^^^
        Build and run distributed applications with a
        :doc:`simple and flexible API <../ray-core/walkthrough>`.
        :doc:`Parallelize <../ray-core/walkthrough>` single-machine code with
        little to zero code changes.
        +++
        .. button-ref:: ../ray-core/walkthrough
            :color: primary
            :outline:
            :expand:

            Ray Core

    .. grid-item-card::

        **Deploy large-scale workloads**
        ^^^
        Deploy workloads on :doc:`AWS, GCP, Azure <../cluster/getting-started>` or
        :doc:`on premise <../cluster/vms/user-guides/launching-clusters/on-premises>`.
        Use Ray cluster managers to run Ray on existing
        :doc:`Kubernetes <../cluster/kubernetes/index>`,
        :doc:`YARN <../cluster/vms/user-guides/community/yarn>`,
        or :doc:`Slurm <../cluster/vms/user-guides/community/slurm>` clusters.
        +++
        .. button-ref:: ../cluster/getting-started
            :color: primary
            :outline:
            :expand:

            Ray Clusters
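As a hedged sketch, launching such a cluster with the `ray up` CLI takes a YAML config along these lines. The cluster name, provider, region, and worker count below are illustrative assumptions, not recommendations:

```yaml
# Illustrative minimal autoscaler config for `ray up cluster.yaml`.
# All values are example assumptions; see the cluster docs for the full schema.
cluster_name: demo-cluster
max_workers: 2
provider:
    type: aws
    region: us-west-2
```

Running `ray down cluster.yaml` with the same file tears the cluster back down.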
Each of Ray's five native libraries distributes a specific ML task:

- Data: scalable, framework-agnostic data loading and transformation.
- Train: distributed multi-node and multi-core model training with fault tolerance.
- Tune: scalable hyperparameter tuning to optimize model performance.
- Serve: scalable and programmable serving to deploy models for online inference.
- RLlib: scalable distributed reinforcement learning workloads.
Ray's libraries serve both data scientists and ML engineers. Data scientists can use them to scale individual workloads and end-to-end ML applications. ML engineers get scalable platform abstractions that make it easy to onboard and integrate tooling from the broader ML ecosystem.
For custom applications, the Ray Core library lets Python developers easily build scalable, distributed systems that run on a laptop, cluster, cloud, or Kubernetes. It's the foundation on which the Ray AI libraries and third-party integrations (the Ray ecosystem) are built.
Ray runs on any machine, cluster, cloud provider, and Kubernetes, and features a growing ecosystem of community integrations.