docs/examples_notebooks/api_overview.ipynb

python
# Copyright (c) 2024 Microsoft Corporation.
# Licensed under the MIT License.

API Overview

This notebook demonstrates how to interact with graphrag as a library using its API rather than the CLI. Note that graphrag's CLI itself routes all operations through this same API.

python
from pathlib import Path
from pprint import pprint

import graphrag.api as api
import pandas as pd
from graphrag.config.load_config import load_config
from graphrag.index.typing.pipeline_run_result import PipelineRunResult
python
PROJECT_DIRECTORY = "<your project directory>"

Prerequisite

As a prerequisite to all API operations, a GraphRagConfig object is required. It is the primary means to control the behavior of graphrag and can be instantiated from a settings.yaml configuration file.

Please refer to the CLI docs for more detailed information on how to generate the settings.yaml file.

Generate a GraphRagConfig object

python
# note that we expect this to fail on the deployed docs because the PROJECT_DIRECTORY is not set to a real location.
# if you run this notebook locally, make sure to point at a location containing your settings.yaml
graphrag_config = load_config(Path(PROJECT_DIRECTORY))

Indexing API

Indexing is the process of ingesting raw text data and constructing a knowledge graph. GraphRAG currently supports plaintext (.txt) and .csv file formats.
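Input documents are read from the input directory configured in settings.yaml (by default, `input/` under the project directory). A minimal sketch of preparing a plaintext document for indexing, using a temporary directory as a stand-in for a real project:

```python
import tempfile
from pathlib import Path

# Stand-in project directory; in practice this would be PROJECT_DIRECTORY.
project = Path(tempfile.mkdtemp())
input_dir = project / "input"
input_dir.mkdir(parents=True, exist_ok=True)

# A plaintext (.txt) document for the indexing pipeline to ingest.
(input_dir / "a_christmas_carol.txt").write_text(
    "Marley was dead: to begin with.", encoding="utf-8"
)
print(sorted(p.name for p in input_dir.iterdir()))
```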

Build an index

python
index_result: list[PipelineRunResult] = await api.build_index(config=graphrag_config)

# index_result is a list of results, one per workflow in the indexing pipeline that was run
for workflow_result in index_result:
    status = f"error\n{workflow_result.errors}" if workflow_result.errors else "success"
    print(f"Workflow Name: {workflow_result.workflow}\tStatus: {status}")
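The status check above can be factored into a small helper. It is sketched here with stand-in objects carrying the same `workflow`/`errors` fields as `PipelineRunResult`, since an actual run requires a fully configured project:

```python
from types import SimpleNamespace


def summarize_results(results) -> dict:
    """Map each workflow name to "success" or its list of errors."""
    return {r.workflow: r.errors or "success" for r in results}


# Stand-ins mimicking PipelineRunResult's workflow/errors attributes;
# the workflow names below are illustrative only.
fake_results = [
    SimpleNamespace(workflow="create_base_text_units", errors=None),
    SimpleNamespace(workflow="extract_graph", errors=["rate limit exceeded"]),
]
print(summarize_results(fake_results))
```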

Query an index

To query an index, several index files must first be read into memory and passed to the query API.

python
entities = pd.read_parquet(f"{PROJECT_DIRECTORY}/output/entities.parquet")
communities = pd.read_parquet(f"{PROJECT_DIRECTORY}/output/communities.parquet")
community_reports = pd.read_parquet(
    f"{PROJECT_DIRECTORY}/output/community_reports.parquet"
)

response, context = await api.global_search(
    config=graphrag_config,
    entities=entities,
    communities=communities,
    community_reports=community_reports,
    community_level=2,
    dynamic_community_selection=False,
    response_type="Multiple Paragraphs",
    query="Who is Scrooge and what are his main relationships?",
)

The response object is the official response from graphrag, while the context object holds metadata about the querying process used to obtain the final response.

python
print(response)

Digging into the context provides extremely granular information, such as which sources of data (down to the level of individual text chunks) were ultimately retrieved and used as part of the context sent to the LLM.

python
pprint(context)  # noqa: T203
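The exact shape of context depends on the search method, but it is typically a mapping from context names (e.g. community reports) to the records that were packed into the LLM prompt. A hedged sketch of walking such a mapping, using a synthetic dict in place of a real query result:

```python
# Synthetic stand-in for the context returned by a search call; the
# real object maps context names to the records used to build the prompt.
context = {
    "reports": [
        {"id": "0", "title": "Scrooge and his relationships"},
        {"id": "1", "title": "Marley's ghost"},
    ],
}
for name, records in context.items():
    print(f"{name}: {len(records)} record(s)")
```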