# Setup Guide
The repo, including this guide, is tested on Linux. Where applicable, we document differences for Windows and macOS, although such documentation may not always be up to date.
In addition to the pip-installable package, several extras are provided:

- `[gpu]`: needed for running GPU models.
- `[spark]`: needed for running Spark models.
- `[dev]`: needed for development.
- `[all]`: `[gpu]` | `[spark]` | `[dev]`.
- `[experimental]`: models that are not thoroughly tested and/or may require additional installation steps.

Follow the Getting Started section in the README to install the package and run the examples.
1. Make sure CUDA is installed.
2. Follow Steps 1-5 in the Getting Started section in README.md to install the package and Jupyter kernel, adding the `gpu` extra to the pip install command:
   ```bash
   pip install recommenders[gpu]
   ```
3. Within VSCode:
   1. Open a notebook with a GPU model, e.g., `examples/00_quick_start/wide_deep_movielens.ipynb`.
   2. Select the Jupyter kernel `<kernel_name>`.
   3. Run the notebook.
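After installation, a quick way to sanity-check the GPU setup from Python is to ask TensorFlow (one of the GPU dependencies) how many GPUs it can see. This snippet is illustrative, not part of the repo; it prints a note instead of failing when TensorFlow is absent:

```python
# Illustrative GPU sanity check; assumes the [gpu] extra installs TensorFlow.
try:
    import tensorflow as tf

    n_gpus = len(tf.config.list_physical_devices("GPU"))
    msg = f"GPUs visible to TensorFlow: {n_gpus}"
except ImportError:
    msg = "TensorFlow is not installed; install the [gpu] extra first."
print(msg)
```

If this prints `GPUs visible to TensorFlow: 0`, the package imported correctly but CUDA or the driver is not visible to TensorFlow.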
1. Make sure a JDK is installed. For example, OpenJDK 11 can be installed using the command:
   ```bash
   sudo apt-get install openjdk-11-jdk
   ```
2. Follow Steps 1-5 in the Getting Started section in README.md to install the package and Jupyter kernel, adding the `spark` extra to the pip install command:
   ```bash
   pip install recommenders[spark]
   ```
3. Within VSCode:
   1. Open a notebook with a Spark model, e.g., `examples/00_quick_start/als_movielens.ipynb`.
   2. Select the Jupyter kernel `<kernel_name>`.
   3. Run the notebook.
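A minimal smoke test for the Spark setup is to start and stop a local Spark session. The snippet below is illustrative; it reports a failure message rather than raising if pyspark or Java is missing:

```python
# Illustrative Spark smoke test; assumes the [spark] extra installs pyspark.
try:
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.master("local[1]").appName("smoke-test").getOrCreate()
    msg = f"Spark {spark.version} started successfully"
    spark.stop()
except Exception as exc:  # ImportError without pyspark, Py4J/Java errors otherwise
    msg = f"Spark check failed: {exc}"
print(msg)
```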
The following instructions were tested on Databricks Runtime 15.4 LTS (Apache Spark version 3.5.0), 14.3 LTS (Apache Spark version 3.5.0), 13.3 LTS (Apache Spark version 3.4.1), and 12.2 LTS (Apache Spark version 3.3.2). We have tested these runtimes with Python 3.9, 3.10, and 3.11.
After a Databricks cluster is provisioned:
1. Go to the "Compute" tab on the left of the page, click on the provisioned cluster, and then click on "Libraries".
2. Click the "Install new" button.
3. In the popup window, select "PyPI" as the library source. Enter "recommenders[examples]" as the package name. Click "Install" to install the package.
4. Now repeat step 3 for the packages below:
   - `numpy<2.0.0`
   - `pandera<=0.18.3`
   - `scipy<=1.13.1`
This repository includes an end-to-end example notebook that uses Azure Databricks to estimate a recommendation model using matrix factorization with Alternating Least Squares, writes pre-computed recommendations to Azure Cosmos DB, and then creates a real-time scoring service that retrieves the recommendations from Cosmos DB. In order to execute that notebook, you must install the Recommenders repository as a library (as described above), and you must also install some additional dependencies. With the Quick install method, you just need to pass an additional option to the installation script.
<details>
<summary><strong><em>Quick install</em></strong></summary>

This option utilizes the installation script to do the setup. Just run the installation script with an additional option. If you have already run the script once to upload and install the `Recommenders.egg` library, you can also add an `--overwrite` option:

```bash
python tools/databricks_install.py --overwrite --prepare-o16n <CLUSTER_ID>
```
This script does all of the steps described in the Manual setup section below.
</details>

<details>
<summary><strong><em>Manual setup</em></strong></summary>

You must install three packages as libraries from PyPI:

- `azure-cli==2.0.56`
- `azureml-sdk[databricks]==1.0.8`
- `pydocumentdb==2.3.3`

You can follow the instructions here for details on how to install packages from PyPI.
Additionally, you must install the spark-cosmosdb connector on the cluster. The easiest way to manually do that is to:
1. Download the appropriate jar. This is the jar for Spark versions 3.1.X, and is the appropriate version for the recommended Azure Databricks runtime detailed above. See the Databricks installation script for other Databricks runtimes.
2. Log into your Azure Databricks workspace and select the Clusters button on the left.
3. Select the Upload and Jar options, and click in the box that has the text "Drop JAR here" in it.
4. Navigate to the downloaded `.jar` file, select it, and click Open.
5. Click Install.

</details>

The xlearn package has a dependency on cmake. If you use the xlearn-related notebooks or scripts, make sure cmake is installed in the system. The easiest way to install it on Linux is with apt-get: `sudo apt-get install -y build-essential cmake`. Detailed instructions for installing cmake from source can be found here.
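Before running the xlearn notebooks, you can check whether cmake is on the `PATH`. The snippet below is a small helper sketch (the apt-get line applies to Debian/Ubuntu only):

```shell
# Check for cmake before using xlearn-related notebooks or scripts.
CMAKE_BIN="$(command -v cmake || true)"
if [ -n "$CMAKE_BIN" ]; then
  echo "cmake found at $CMAKE_BIN"
else
  echo "cmake missing; on Debian/Ubuntu run: sudo apt-get install -y build-essential cmake"
fi
```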
For Spark features to work, make sure Java and Spark are installed, and that the respective environment variables such as `JAVA_HOME`, `SPARK_HOME`, and `HADOOP_HOME` are set properly. Also make sure the environment variables `PYSPARK_PYTHON` and `PYSPARK_DRIVER_PYTHON` are set to the same Python executable.
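As a sketch, the variables can be set as below; the `JAVA_HOME` and `SPARK_HOME` paths are placeholders for wherever Java and Spark are installed on your machine, not prescribed locations:

```shell
# Placeholder paths -- replace with your actual Java and Spark locations.
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64
export SPARK_HOME=/opt/spark
export HADOOP_HOME="$SPARK_HOME"
# Both PySpark variables must point to the same Python executable.
export PYSPARK_PYTHON="$(command -v python)"
export PYSPARK_DRIVER_PYTHON="$PYSPARK_PYTHON"
```

Add these lines to your shell profile (e.g., `~/.bashrc`) to make them persistent.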
We recommend using Homebrew to install system dependencies on macOS. You may also need to install `lightgbm` using Homebrew before pip-installing the package.
To install uv on macOS:

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```
If you use zsh, quote the extras so the brackets are not expanded as a glob pattern: `uv pip install 'recommenders[<extras>]'`.
For Spark features to work, make sure Java and Spark are installed first. Also make sure the environment variables `PYSPARK_PYTHON` and `PYSPARK_DRIVER_PYTHON` are set to the same Python executable.
If you want to contribute to Recommenders, please first read the Contributing Guide. You will notice that our development branch is staging.
To start developing, you need to install the latest `staging` branch locally, along with the `dev` extra and any other extras you want. For example, to start developing with GPU models, you can use the following commands:

```bash
git checkout staging
pip install -e .[dev,gpu]
```
You can decide which extras you want to install; if you want to install all of them, you can use the following commands:

```bash
git checkout staging
pip install -e .[all]
```
We also provide a `devcontainer.json` and Dockerfile for developers, to facilitate development in Dev Containers with VS Code and GitHub Codespaces.
<details>
<summary><strong><em>VS Code Dev Containers</em></strong></summary>

The typical scenario for using Docker containers in development is as follows: we want to develop applications for a specific environment, so we build that environment into a container image and do the development inside a container started from it.

To use VS Code Dev Containers, your local machine must have the following applications installed:

- Docker
- VS Code with the Dev Containers extension

Then, when you open the Recommenders folder in VS Code and reopen it in a container, VS Code will build the Docker image specified in `devcontainer.json`, install a VS Code server in the container, and mount the folder into the container. Alternatively, VS Code can build the image specified in `devcontainer.json` and clone the specified branch of Recommenders into the container. Once everything is set up, VS Code will act as a client to the server in the container, and all subsequent operations on VS Code will be performed against the container.
</details>

<details>
<summary><strong><em>GitHub Codespaces</em></strong></summary>

GitHub Codespaces also uses the `devcontainer.json` and Dockerfile in the repo to create the environment on a VM for you to develop in the web version of VS Code. To use GitHub Codespaces on Recommenders, go to the Recommenders repo $\to$ switch to the branch of interest $\to$ Code $\to$ Codespaces $\to$ Create codespace on the branch.

</details>
The `devcontainer.json` describes:

- the build arguments, such as `COMPUTE` and `PYTHON_VERSION`;
- the `postCreateCommand` to run once the container is created.

The Dockerfile is used in 3 places: VS Code Dev Containers, GitHub Codespaces, and the testing workflows.
Depending on the type of recommender system and the notebook that needs to be run, there are different computational requirements.
Currently, tests are done on Python CPU (the base environment), Python GPU (corresponding to the `[gpu]` extra above), and PySpark (corresponding to the `[spark]` extra above).
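To see which of these optional environments is usable in the current interpreter, you can probe for the key packages. The package names below are the usual ones pulled in by each extra and are assumptions, not a definitive mapping:

```python
from importlib.util import find_spec

# Probe for representative packages of each extra (names are assumptions).
extras_available = {
    "gpu": find_spec("tensorflow") is not None or find_spec("torch") is not None,
    "spark": find_spec("pyspark") is not None,
}
print(extras_available)
```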
Another way is to build a Docker image and use the functions inside a Docker container.
The process of making a new release and publishing it to PyPI is as follows:
First make sure that the tag that you want to add, e.g., 0.6.0, is added in `recommenders/__init__.py`. Follow the contribution guideline to add the change.
```bash
git tag -a 0.6.0 -m "Recommenders 0.6.0"
git push origin 0.6.0
pip install twine
twine upload recommenders*
```
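Before pushing the tag, it can help to confirm that the tag string matches `__version__`. The helper below is an illustrative sketch, not part of the repo:

```python
import re

def tag_matches_version(tag: str, init_text: str) -> bool:
    """Return True if `tag` equals the __version__ declared in the given __init__.py text."""
    m = re.search(r'__version__\s*=\s*["\']([^"\']+)["\']', init_text)
    return m is not None and m.group(1) == tag

# Hypothetical __init__.py contents:
init_text = '__version__ = "0.6.0"\n'
print(tag_matches_version("0.6.0", init_text))  # True
```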