docs/content/guides/developer/accessing-data/custom-indexer/build.mdx
This guide provides a practical example of building a custom indexer. Refer to Custom Indexing Framework and Indexer Pipeline Architecture for a conceptual overview of the indexer framework.
To build a complete custom indexer, use the sui-indexer-alt-framework. The steps that follow demonstrate how to create a sequential pipeline that extracts transaction digests from Sui checkpoints and stores them in a local PostgreSQL database. You can find the source code for the framework in the Sui repo on GitHub.
:::tip
While this example uses PostgreSQL with Diesel (a popular Rust ORM and query builder) for minimalism and out-of-the-box support, the sui-indexer-alt-framework is designed for flexible storage. You can use different databases (such as MongoDB, CouchDB, or similar) or utilize other database clients if you prefer not to use Diesel. To achieve this, implement the framework's Store and Connection traits and define your database write logic directly within your Handler::commit() method.
:::
<Tabs className="tabsHeadingCentered--small">
<TabItem value="prereq" label="Prerequisites">
<details className="nudge-details">
<summary>Check installation</summary>

If you're unsure whether your system has the necessary software properly installed, you can verify installation with the following commands.
$ psql --version
$ diesel --version
</details>
</TabItem>
</Tabs>
The following steps show how to create an indexer that:

- Connects to a Sui checkpoint store and streams checkpoint data.
- Extracts the transaction digests from each checkpoint.
- Stores the digests in a local PostgreSQL database.
In the end, you have a working indexer that demonstrates all core framework concepts and can serve as a foundation for more complex custom indexers.
:::info
Sui provides checkpoint stores for both Mainnet and Testnet.
- Testnet: https://checkpoints.testnet.sui.io
- Mainnet: https://checkpoints.mainnet.sui.io

:::
##step Project setup
First, open your console to the directory where you want to store your indexer project. Use the cargo new command to create a new Rust project, then navigate to its directory.
$ cargo new simple-sui-indexer
$ cd simple-sui-indexer
##step Configure dependencies
Replace your Cargo.toml code with the following configuration and save.
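A sketch of what that configuration can look like follows. The version numbers and the git dependency spec are assumptions; check the example in the Sui repo for the exact manifest.

```toml
[package]
name = "simple-sui-indexer"
version = "0.1.0"
edition = "2021"

[dependencies]
# Core framework providing pipeline infrastructure (lives in the Sui repo).
sui-indexer-alt-framework = { git = "https://github.com/MystenLabs/sui" }
# Type-safe database ORM with asynchronous support.
diesel = { version = "2", features = ["postgres"] }
diesel-async = { version = "0.5", features = ["postgres"] }
# Async runtime required by the framework.
tokio = { version = "1", features = ["full"] }
# Command-line argument parsing for configuration.
clap = { version = "4", features = ["derive"] }
# Error handling and async trait implementations.
anyhow = "1"
async-trait = "0.1"
# Loads the .env file that stores your PostgreSQL URL.
dotenvy = "0.15"
```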
The manifest now includes the following dependencies:
- sui-indexer-alt-framework: Core framework providing pipeline infrastructure.
- diesel / diesel-async: Type-safe database ORM with asynchronous support.
- tokio: Async runtime required by the framework.
- clap: Command-line argument parsing for configuration.
- anyhow and async-trait: Error handling and async trait implementations.
- dotenvy: Loads the .env file that stores your PostgreSQL URL.

##step Create database
Before configuring migrations, create and verify your local PostgreSQL database:
$ createdb sui_indexer
Get your connection details:
$ psql sui_indexer -c "\conninfo"
If successful, your console should display a message similar to the following:
You are connected to database "sui_indexer" as user "username" via socket in "/tmp" at port "5432".
If you receive a createdb error similar to the following:

createdb: error: connection to server on socket "/tmp/.s.PGSQL.5432" failed: FATAL: role "username" does not exist

then you need to create the user (replace username with the name provided in your error message).
$ sudo -u postgres createuser --superuser username
Enter your system account password if sudo prompts for it, then try the createdb command again.
You can now set a shell variable to your database URL, as it's used in the following commands. Make sure to change username to your actual username.
$ PSQL_URL=postgres://username@localhost:5432/sui_indexer
You can now test your connection with the following command:
$ psql $PSQL_URL -c "SELECT 'Connected';"
If successful, your console or terminal should respond with a message similar to the following:
?column?
-----------
Connected
(1 row)
##step Database setup
Before you start coding, make sure you set up a local PostgreSQL database from the previous step. This is required for the indexer to store the extracted transaction data.
The following database setup steps have you:

- Configure Diesel for the project.
- Create the database table using Diesel migrations.
- Apply the migration and generate the Rust schema file.
###substep Configure Diesel
First, create a diesel.toml file (in the same folder as Cargo.toml) to configure database migrations.
$ touch diesel.toml
Update and save the file with the following code:
<ImportContent source="examples/rust/basic-sui-indexer/diesel.toml" mode="code" />

###substep Create database table using Diesel migrations
Diesel migrations are a way of creating and managing database tables using SQL files. Each migration has two files:
- up.sql: Creates and changes the table.
- down.sql: Removes and undoes the changes.

Use the diesel setup command to create the necessary directory structure, passing your database URL with the --database-url argument.
$ diesel setup --database-url $PSQL_URL
Use the diesel migration command at the root of your project to then generate the migration files.
$ diesel migration generate transaction_digests
You should now have a migrations folder in your project. There should be a subdirectory in this folder with the name format YYYY-MM-DD-HHMMSS_transaction_digests. This folder should contain the up.sql and down.sql files.
Open up.sql in the newly generated migration folder and replace its contents with the following code:
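A minimal sketch of up.sql follows, assuming the two columns this guide works with (the digest and the checkpoint it came from); the migration in the Sui repo's example is the authoritative version.

```sql
-- Create the table the indexer writes to. TEXT keeps the Base58 digest
-- readable; see the tip below about using BYTEA in production.
CREATE TABLE IF NOT EXISTS transaction_digests (
    tx_digest TEXT PRIMARY KEY,
    checkpoint_sequence_number BIGINT NOT NULL
);
```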
:::tip
This example uses the TEXT data type for tx_digest, but best practice for a production indexer is to use the BYTEA data type.
The TEXT type is used to make the transaction digest easily readable and directly usable with external tools. Digests are Base58 encoded, and because PostgreSQL cannot natively display BYTEA data in this format, storing it as TEXT allows you to copy the digest from a query and paste it into an explorer like SuiScan.
For a production environment, however, BYTEA is strongly recommended. It offers superior storage and query efficiency by storing the raw byte representation, which is more compact and significantly faster for comparisons than a string. Refer to Binary data performance in PostgreSQL on the CYBERTEC website for more information.
:::
Save up.sql, then open down.sql to edit. Replace the contents of the file with the following code and save it:
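Assuming the table sketched above, down.sql simply reverses the migration:

```sql
-- Undo everything up.sql did.
DROP TABLE IF EXISTS transaction_digests;
```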
###substep Apply migration and generate Rust schema
From the root of your project, use the diesel migration command to create tables.
$ diesel migration run --database-url $PSQL_URL
Then use the diesel print-schema command to generate the schema.rs file from the actual database.
$ diesel print-schema --database-url $PSQL_URL > src/schema.rs
Your src/schema.rs file should now look like the following:
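Assuming the columns sketched in up.sql, the generated file contains a diesel::table! invocation along these lines:

```rust
// @generated automatically by Diesel CLI.

diesel::table! {
    transaction_digests (tx_digest) {
        tx_digest -> Text,
        checkpoint_sequence_number -> Int8,
    }
}
```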
After running the previous commands, your project is set up for the next steps:
- The database contains a transaction_digests table with the defined columns.
- src/schema.rs contains automatically generated Rust code that represents this table structure.

Diesel's migration system evolves the database schema over time in a structured and version-controlled way. For a complete walkthrough, see the official Diesel Getting Started guide.
##step Create data structure
To simplify writes to Diesel, you can define a struct that represents a record in the transaction_digests table.
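A sketch of such a struct follows. The struct name (StoredTransactionDigest) is illustrative, and the exact import path for the FieldCount derive is an assumption; check the example in the Sui repo for the real source.

```rust
use diesel::prelude::*;
// FieldCount derive; the crate path shown here is an assumption (it may
// also be re-exported from sui_indexer_alt_framework).
use sui_field_count::FieldCount;

use crate::schema::transaction_digests;

#[derive(Insertable, FieldCount)]
#[diesel(table_name = transaction_digests)]
pub struct StoredTransactionDigest {
    pub tx_digest: String,
    pub checkpoint_sequence_number: i64,
}
```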
Key annotations:

- FieldCount: Required by sui-indexer-alt-framework for memory optimization and batch-processing efficiency. It limits the maximum size of a batch so that a single SQL statement doesn't exceed the PostgreSQL limit on the number of bind parameters.
- diesel(table_name = transaction_digests): Maps this Rust struct to the transaction_digests table, whose schema was generated in a previous step.
- Insertable: Allows this struct to be inserted into the database using Diesel.

##step Define the Handler struct in handlers.rs
Create a handlers.rs file in your src directory.
$ touch ./src/handlers.rs
Open the file and define a concrete struct to implement the Processor and Handler traits:
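For this example the struct needs no fields, so a unit struct is enough:

```rust
// Implements the Processor and Handler traits in the steps that follow.
pub struct TransactionDigestHandler;
```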
Save the file but keep it open as the next steps add to its code.
##step Implement the Processor
The Processor trait defines how to extract and transform data from checkpoints. The resulting data is then passed to Handler::commit.
Add the necessary dependencies at the top of the file.
<ImportContent source="examples/rust/basic-sui-indexer/src/handlers.rs" mode="code" tag="processordeps" />

After the TransactionDigestHandler struct, add the Processor code:
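The following is a sketch of that implementation, assuming the trait shape shown in the Processor trait definition below and the StoredTransactionDigest struct from the previous step; field and method details may differ from the example in the Sui repo.

```rust
// Assumes the imports from the snippet above, roughly:
//   use std::sync::Arc;
//   use anyhow::Result;
//   use sui_indexer_alt_framework::pipeline::Processor;
//   use sui_indexer_alt_framework::types::full_checkpoint_content::CheckpointData;
#[async_trait::async_trait]
impl Processor for TransactionDigestHandler {
    // Unique identifier used in monitoring and logging.
    const NAME: &'static str = "transaction_digests";

    // The data that flows through the pipeline to the Handler.
    type Value = StoredTransactionDigest;

    // Map each transaction in the checkpoint to a row for the table.
    async fn process(&self, checkpoint: &Arc<CheckpointData>) -> Result<Vec<Self::Value>> {
        let checkpoint_sequence_number =
            checkpoint.checkpoint_summary.sequence_number as i64;

        Ok(checkpoint
            .transactions
            .iter()
            .map(|tx| StoredTransactionDigest {
                tx_digest: tx.transaction.digest().to_string(),
                checkpoint_sequence_number,
            })
            .collect())
    }
}
```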
Key concepts:
- NAME: Unique identifier for this processor, used in monitoring and logging.
- type Value: Defines what data flows through the pipeline, which ensures type safety.
- process(): Core logic that transforms checkpoint data into your custom data structure.

Save the handlers.rs file.
<details>
<summary>Processor trait definition</summary>
<ImportContent source="crates/sui-indexer-alt-framework/src/pipeline/processor.rs" mode="code" trait="Processor" />
</details>

##step Implement the Handler
The Handler trait defines how to commit data to the database. Append the Handler dependencies to the bottom of the dependency list you created in the previous step.
Add the logic for Handler after the Processor code. The complete code is available at the end of this step.
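A sketch of the Handler implementation follows. The associated types, method signatures, and import paths are assumptions based on the Handler trait definition imported at the end of this step (in particular, the store and connection types for the framework's bundled PostgreSQL support), so treat them as illustrative rather than exact.

```rust
// Additional imports this sketch assumes (paths are assumptions):
//   use diesel_async::RunQueryDsl;
//   use sui_indexer_alt_framework::pipeline::sequential::Handler;
//   use sui_indexer_alt_framework::postgres::{Connection, Db};
#[async_trait::async_trait]
impl Handler for TransactionDigestHandler {
    // The framework's bundled PostgreSQL store.
    type Store = Db;
    // Accumulate rows from several checkpoints before committing.
    type Batch = Vec<Self::Value>;

    fn batch(batch: &mut Self::Batch, values: Vec<Self::Value>) {
        batch.extend(values);
    }

    // Write the whole batch with one INSERT; skip digests already present
    // so that re-processing a checkpoint stays idempotent.
    async fn commit<'a>(
        batch: &Self::Batch,
        conn: &mut Connection<'a>,
    ) -> Result<usize> {
        Ok(diesel::insert_into(transaction_digests::table)
            .values(batch)
            .on_conflict_do_nothing()
            .execute(conn)
            .await?)
    }
}
```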
How sequential batching works:
- process() returns values for each checkpoint.
- batch() accumulates values from multiple checkpoints.
- commit() writes the batch when the framework reaches its limits (H::MAX_BATCH_CHECKPOINTS).

:::tip
You can override the default batch limits by implementing constants in your Handler.
:::
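For example, a hypothetical override (the constant names are taken from the Handler trait definition imported below; the values shown are arbitrary):

```rust
// Inside the `impl Handler for TransactionDigestHandler` block shown earlier:
const MIN_EAGER_ROWS: usize = 100;        // commit once this many rows accumulate
const MAX_BATCH_CHECKPOINTS: usize = 300; // cap on checkpoints folded into one batch
```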
The handlers.rs file is now complete. Save the file.
<details>
<summary>Complete handlers.rs file</summary>
<ImportContent source="examples/rust/basic-sui-indexer/src/handlers.rs" mode="code" />
</details>
<details>
<summary>Handler trait definition</summary>
<ImportContent source="crates/sui-indexer-alt-framework/src/pipeline/sequential/committer.rs" mode="code" trait="Handler" />
</details>

##step Create .env file
The main function you create in the next step needs the value stored in the shell variable $PSQL_URL. To make it available, create a .env file with that data.
echo "DATABASE_URL=$PSQL_URL" > .env
echo "DATABASE_URL=$PSQL_URL" > .env
"DATABASE_URL=$env:PSQL_URL" | Out-File -Encoding UTF8 .env
After running the command for your environment, make sure the .env file exists at your project root with the correct data.
##step Create main function
Now, to tie everything together in the main function, open your main.rs file. Replace the default code with the following and save the file:
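The following sketch shows roughly how the pieces fit together. The IndexerCluster builder method names are assumptions based on the components listed below, and the url crate is additionally assumed for parsing the database URL; consult the example in the Sui repo for the exact API.

```rust
use anyhow::Result;
use clap::Parser;
use diesel_migrations::{embed_migrations, EmbeddedMigrations};
use sui_indexer_alt_framework::cluster::{Args, IndexerCluster};
use sui_indexer_alt_framework::pipeline::sequential::SequentialConfig;
use url::Url;

mod handlers;
mod models;
mod schema;

use handlers::TransactionDigestHandler;

// Bundle the migration files into the binary so the indexer can apply
// them to the database automatically on startup.
const MIGRATIONS: EmbeddedMigrations = embed_migrations!("migrations");

#[tokio::main]
async fn main() -> Result<()> {
    // Load DATABASE_URL from the .env file created in the previous step.
    dotenvy::dotenv().ok();
    let database_url = Url::parse(&std::env::var("DATABASE_URL")?)?;

    // Command-line configuration (--first-checkpoint, --remote-store-url, ...).
    let args = Args::parse();

    // Set up database connections, checkpoint streaming, and monitoring.
    let mut cluster = IndexerCluster::builder()
        .with_args(args)
        .with_database_url(database_url)
        .with_migrations(&MIGRATIONS)
        .build()
        .await?;

    // Register the sequential pipeline with default batching behavior.
    cluster
        .sequential_pipeline(TransactionDigestHandler, SequentialConfig::default())
        .await?;

    // Start processing checkpoints; awaiting the returned handle blocks
    // until the indexer stops.
    cluster.run().await?.await?;
    Ok(())
}
```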
Key components explained:
- embed_migrations!: Includes your migration files in the binary so the indexer automatically updates the database schema on startup.
- Args::parse(): Provides command-line configuration like --first-checkpoint, --remote-store-url, and so on.
- IndexerCluster::builder(): Sets up the framework infrastructure (database connections, checkpoint streaming, monitoring).
- sequential_pipeline(): Registers a sequential pipeline that processes checkpoints in order with smart batching.
- SequentialConfig::default(): Uses framework defaults for batch sizes and checkpoint lag (how many checkpoints to batch together).
- cluster.run(): Starts processing checkpoints and blocks until completion.

Your indexer is now complete. The next steps walk you through running the indexer and checking its functionality.
##step Run your indexer
Use the cargo run command to run your indexer against Testnet. You can choose from multiple checkpoint data sources (see Checkpoint Data Sources for details):
Using remote checkpoint storage:
$ cargo run -- --remote-store-url https://checkpoints.testnet.sui.io
Using gRPC streaming from a full node (with remote checkpoint store fallback):
$ cargo run -- --remote-store-url https://checkpoints.testnet.sui.io --streaming-url https://fullnode.testnet.sui.io:443
:::info
If your operating system prompts you, allow incoming network connections for the basic-sui-indexer application.
:::
If successful, your console informs you that the indexer is running.
##step Verify results
Open a new terminal or console and connect to your database to check the results:
$ psql sui_indexer
After connecting, run a few queries to verify your indexer is working:
Check how many transaction digests are indexed:
SELECT COUNT(*) FROM transaction_digests;
View sample records:
SELECT * FROM transaction_digests LIMIT 5;
To confirm your data is accurate, copy any transaction digest from your database and verify it on SuiScan: https://suiscan.xyz/testnet/home
You've built a working custom indexer. 🎉
The key concepts covered here apply to any custom indexer: define your data structure, implement the Processor and Handler traits, and let the framework handle the infrastructure.