# Installation
The only prerequisite for installing Nx is Elixir itself. If you don't have Elixir installed on your machine, visit the [official installation page](https://elixir-lang.org/install.html).
There are several ways to install Nx (Numerical Elixir), depending on your project type and needs.
## Adding Nx to a Mix project

If you are working inside a Mix project, the recommended way to install Nx is by adding it to your `mix.exs` dependencies:

```elixir
defp deps do
  [
    {:nx, "~> 0.9"} # Install the latest stable version
  ]
end
```

Then fetch the dependencies:

```shell
mix deps.get
```
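After fetching, a quick way to confirm Nx is available is an IEx session inside the project (`iex -S mix`). This is just a smoke test, not part of the installation itself:

```elixir
# Inside `iex -S mix`: build a small tensor and reduce it.
t = Nx.tensor([[1, 2], [3, 4]])

# Element-wise doubling, then a sum over all elements.
t |> Nx.multiply(2) |> Nx.sum()
```

If this returns a tensor without raising, Nx is installed correctly.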
## Installing from GitHub

If you need the latest, unreleased features, install Nx directly from the GitHub repository. In your `mix.exs`:

```elixir
defp deps do
  [
    {:nx, github: "elixir-nx/nx", branch: "main", sparse: "nx"}
  ]
end
```

Then run:

```shell
mix deps.get
```
## Standalone scripts with Mix.install

If you don't have a Mix project and just want to run a standalone script, use `Mix.install/1` to dynamically fetch and install Nx:

```elixir
Mix.install([:nx])

tensor = Nx.tensor([1, 2, 3])
IO.inspect(tensor)
```

Run the script with:

```shell
elixir my_script.exs
```

**Best for:** Quick experiments, small scripts, or one-off computations.
To use the latest development version in a script (without a Mix project):

```elixir
Mix.install([
  {:nx, github: "elixir-nx/nx", branch: "main", sparse: "nx"}
])

tensor = Nx.tensor([1, 2, 3])
IO.inspect(tensor)
```

Run:

```shell
elixir my_script.exs
```

**Best for:** Trying new features from Nx without creating a full project.
## GPU/TPU acceleration with EXLA

To enable GPU/TPU acceleration with Google's XLA compiler, install Nx along with EXLA:

```elixir
defp deps do
  [
    {:nx, "~> 0.9"},
    {:exla, "~> 0.9"} # EXLA (Google XLA backend)
  ]
end
```

```shell
mix deps.get
```

Then configure EXLA as the default backend and `defn` compiler, for example at application start or at the top of a script:

```elixir
Nx.default_backend(EXLA.Backend)
Nx.Defn.default_options(compiler: EXLA)
```

**Best for:** Running Nx on GPUs or TPUs using Google's XLA compiler.
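As a sketch of what this configuration enables (assuming EXLA compiled successfully for your machine; the `MyMath.softmax/1` function below is illustrative, not part of Nx):

```elixir
Nx.default_backend(EXLA.Backend)
Nx.Defn.default_options(compiler: EXLA)

defmodule MyMath do
  import Nx.Defn

  # With the defaults above, this numerical definition is
  # compiled through EXLA (targeting GPU/TPU when available).
  defn softmax(t) do
    Nx.exp(t) / Nx.sum(Nx.exp(t))
  end
end

MyMath.softmax(Nx.tensor([1.0, 2.0, 3.0]))
```

Functions defined with `defn` are traced and compiled as a whole, which is where the XLA speedups come from.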
## PyTorch backend with Torchx

To run Nx operations on PyTorch's backend (LibTorch), install Nx along with Torchx:

```elixir
defp deps do
  [
    {:nx, "~> 0.9"},
    {:torchx, "~> 0.9"} # PyTorch (LibTorch) backend
  ]
end
```

```shell
mix deps.get
```

Then set Torchx as the default backend:

```elixir
Nx.default_backend(Torchx.Backend)
```

**Best for:** Deep learning applications with PyTorch acceleration.
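A minimal sketch, assuming Torchx fetched or built LibTorch successfully on your machine:

```elixir
Nx.default_backend(Torchx.Backend)

# Tensors created from here on are allocated by LibTorch;
# inspecting one shows Torchx.Backend as its backend.
t = Nx.tensor([1.0, 2.0, 3.0])
Nx.multiply(t, 2)
```

Note that Torchx acts as a tensor backend only; it does not ship a `defn` compiler the way EXLA does, so the `Nx.Defn.default_options(compiler: ...)` step does not apply here.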