We have created open-source performance benchmarks to compare query latencies for Prisma ORM, TypeORM and Drizzle ORM with different database providers such as PostgreSQL on AWS RDS, Supabase and Neon. Read on to learn about our methodology and which TypeScript ORM is the fastest.
Selecting the best ORM for your application involves considering several factors, with query performance being a significant one.
To assist you in deciding which ORM to use for your TypeScript app, we have created open-source performance benchmarks comparing the query performance of three ORM libraries: Prisma ORM, TypeORM, and Drizzle ORM (using their Query API).
So, which ORM is the fastest? The (maybe unsatisfying) answer: It depends!
Based on the data we've collected, it's not possible to conclude that one ORM always performs better than the others. Instead, performance depends on the respective query, dataset, schema, and the infrastructure on which the query is executed.
Check out our performance checklist below to ensure optimal performance for your Prisma ORM queries.
Measuring and comparing query performance can be a daunting task, and there are a lot of factors to consider when creating a fair comparison.
With our benchmarks, we wanted to strike a balance: the benchmarks should be fair and meaningful, so that people can make an informed decision about which ORM to use for their next project, while staying simple and easy to understand without too many layers of indirection or data processing.
You can find the application that we used to measure query performance here.
We have created 14 equivalent queries for Prisma ORM, TypeORM and Drizzle ORM. The queries are sent against a schema with 4 Prisma models (5 tables because of one implicit m-n relation which uses an additional relation table under the hood).
The query latency for each query is measured via performance.now() using this function:
```typescript
export default async function measure(label: string, query: any) {
  const startTime = performance.now();

  await query; // execute the query (i.e. resolve the promise)

  const endTime = performance.now();

  // calculate the elapsed time
  const elapsedTime = endTime - startTime;

  return {
    query: label,
    time: elapsedTime,
  };
}
```
For example, the plain findMany query from Prisma ORM can be measured as follows:
```typescript
const query = prisma.customer.findMany(); // unresolved promise
const label = "prisma-findMany";

await measure(label, query);
```
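Because `measure()` only awaits a promise, its timing behavior can be demonstrated without a database. The following self-contained sketch uses a hypothetical delay-based stand-in for a query (Node 18+ is assumed, where `performance.now()` is available globally):

```typescript
// Same measure() helper as above, redefined here so the snippet runs standalone.
async function measure(label: string, query: Promise<unknown>) {
  const startTime = performance.now();
  await query; // resolving the promise executes the "query"
  const endTime = performance.now();
  return { query: label, time: endTime - startTime };
}

// Any unresolved promise works as a "query" — here, a stand-in that
// resolves after roughly 50ms.
const fakeQuery = new Promise((resolve) => setTimeout(resolve, 50));

measure("fake-findMany", fakeQuery).then((result) => {
  console.log(result.query); // "fake-findMany"
  console.log(result.time >= 45); // true — the elapsed time covers the delay
});
```

Note that the promise must be passed *unresolved*; awaiting the query before calling `measure()` would make the measured time meaningless.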
There is one script per ORM per database (e.g. prisma-postgres.ts) that measures the latencies of all 14 queries individually and stores the results in .csv files.
The scripts have been executed on an EC2 instance against a PostgreSQL database hosted on various providers:
A database connection is established at the beginning of each script and closed at the end.
The sample data is seeded using faker.js. To re-create identical sample datasets in a deterministic fashion, a seed value is provided to the faker instance.
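The effect of seeding can be illustrated with a minimal seeded PRNG (mulberry32 here, purely for illustration — faker's internal generator differs): the same seed always yields the same sequence, which is what makes the sample datasets reproducible across benchmark runs.

```typescript
// A seeded PRNG (mulberry32) illustrates the idea behind faker's seed value:
// identical seeds produce identical "random" sequences, so the generated
// sample data is deterministic.
function mulberry32(seed: number): () => number {
  let state = seed;
  return () => {
    state |= 0;
    state = (state + 0x6d2b79f5) | 0;
    let t = Math.imul(state ^ (state >>> 15), 1 | state);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

const runA = mulberry32(42);
const runB = mulberry32(42);

// Two generators with the same seed stay in lockstep:
console.log(runA() === runB()); // true
console.log(runA() === runB()); // true
```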
The size of the dataset is configurable when running the benchmarks; it determines how many records are created per table. For example, with a dataset size of 1000, there will be 1k records in the Customer, Product, and Address tables and 10k records in the Order table (the Order table is multiplied by a factor of 10 because of its many-to-many relation with Product, which makes the dataset more realistic).
To execute the benchmarks, you can invoke the script as follows:
```sh
sh ./benchmark.sh -i 500 -s 1000 --d postgresql://user:password@host:port/db
```
This executes the pre-defined queries 500 times against a dataset of 1000 records per table. The database URL can alternatively be provided via an environment variable.
To collect our data, we have executed the benchmarks on production infrastructure to simulate a real-world usage scenario. The scripts have been executed from an EC2 instance with this spec:
The databases we used have the following spec:
We have published the results of the benchmarking runs we have executed on: https://benchmarks.prisma.io.
The table displays the median values of the 500 iterations we have executed to measure query latencies. Note that we have discarded outliers from the 500 iterations by removing values above the 99th percentile (p99).
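The aggregation described above can be sketched as follows (`medianBelowP99` is a hypothetical helper name, not the benchmark's actual code): samples above the 99th percentile are discarded first, and the median is taken over the remainder.

```typescript
// Returns the value at percentile p (0..1) of an already-sorted array.
function percentile(sorted: number[], p: number): number {
  const idx = Math.min(sorted.length - 1, Math.ceil(p * sorted.length) - 1);
  return sorted[Math.max(0, idx)];
}

// Drop outliers above p99, then report the median of what remains.
function medianBelowP99(latencies: number[]): number {
  const sorted = [...latencies].sort((x, y) => x - y);
  const p99 = percentile(sorted, 0.99);
  const trimmed = sorted.filter((v) => v <= p99); // discard outliers
  const mid = Math.floor(trimmed.length / 2);
  return trimmed.length % 2 === 1
    ? trimmed[mid]
    : (trimmed[mid - 1] + trimmed[mid]) / 2;
}

// 99 "normal" latencies (1..99ms) plus a single 1000ms outlier:
const samples = Array.from({ length: 99 }, (_, i) => i + 1).concat([1000]);
console.log(medianBelowP99(samples)); // 50 — the 1000ms outlier is discarded
```

Trimming at p99 prevents a handful of pathological iterations (e.g. a cold connection or a network hiccup) from skewing the reported numbers.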
The columns of the table represent the three ORM libraries, the rows show the queries that have been benchmarked.
When expanding a cell, you can view some details about the query, such as:
Benchmarks are an inherently tricky topic and difficult to get right. When companies publish performance benchmarks, they typically show that the company's own products are the fastest on the market, while making it difficult for others to reproduce the results and obscuring the path from raw data collection to the presented numbers.
We did our best to create a fair and neutral setup that's easy to understand and produces meaningful results. With that said, here are some caveats to take into account when looking at the benchmark results:
We have put a lot of effort into making it easy to run the benchmarks yourself, so give it a shot, and if you would like to contribute an improvement, feel free to reach out!
As unsatisfying as it may be, the answer, as so often, is: it depends. Performance is a complex and nuanced topic that depends on a variety of factors and is notoriously hard to predict.
See the performance checklist below to make sure your queries are at optimal speed.
While it's not possible to provide a conclusive answer to that question, we can look at some of the patterns and analyze them.
First off, we found that the difference across DB providers generally seems to be negligible. For example, just looking at the plain findMany query, we can see in the median values and the histogram distribution, that performance has low variances:
| | Prisma ORM | Drizzle ORM | TypeORM |
|---|---|---|---|
| Supabase | 8.00ms | 23.09ms | 5.24ms |
| AWS RDS | 6.59ms | 19.19ms | 4.20ms |
| Neon | 11.43ms | 29.35ms | 7.25ms |
Presumably, RDS has an advantage because the benchmark scripts were executed from an EC2 instance within the same security group, while the Supabase and Neon databases were accessed via the public internet.
For more information, be sure to visit the benchmark site to inspect our results, or run the benchmarks yourself using the database provider of your choice.
If you zoom out and look at the results from a distance, you'll notice that most queries actually perform in similar ballparks with only a few milliseconds difference. As an example, here are the results we've collected on AWS RDS:
UX research shows that delays below the 100ms mark are imperceptible to users and still make a system feel instantaneous, so in most cases these small differences probably shouldn't be a driving factor in your choice of which ORM to use for your next project.
Of course, the database query latency is only one factor in the overall performance of the app you're building, so be sure to measure and optimize the other aspects as well, especially all the network boundaries your system has (such as an HTTP layer).
The Nested find all query has been especially slow with all ORMs and all database providers. It's a simple query that looks like this:
Prisma ORM

```typescript
prisma.customer.findMany({
  include: {
    orders: true,
  },
});
```

Drizzle ORM

```typescript
db.query.Customer.findMany({
  with: {
    orders: true,
  },
});
```

TypeORM

```typescript
AppDataSource.getRepository(Customer).find({ relations: ["orders"] });
```
Taking RDS as an example, these are the median values in the results we've collected:
| | Prisma ORM | Drizzle ORM | TypeORM |
|---|---|---|---|
| Nested find all | 62.4ms | 948.29ms | 56.34ms |
This query has proven to be a lot slower than the other queries for all ORMs because it fetches data from two separate tables and returns a notable amount of data.
The data we've collected doesn't allow for conclusive statements about the individual performance of each ORM. The hard truth about query performance is that it's possible to write fast as well as slow queries with each ORM library.
In the end, much of an application's performance depends on the developer's ability to follow best practices (see the performance checklist below), identify slow queries, and optimize them over time.
Performance plays a crucial role for us at Prisma and we have focused a lot recently on improving various aspects of Prisma ORM in that area, e.g. by introducing the option to use DB-level JOINs, implementing many performance improvements in v5, improving serverless cold starts by 9x or increasing the speed of $queryRaw by 2x in the last 5.17.0 release.
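As a concrete example of the DB-level JOINs option, the slow nested query from above can be pushed down to a single database-level JOIN. This is a sketch, assuming a generated Prisma Client with the `relationJoins` preview feature enabled:

```typescript
// Sketch — requires the `relationJoins` preview feature in the Prisma schema.
// With `relationLoadStrategy: "join"`, the related orders are fetched via a
// single DB-level JOIN instead of a separate query per relation.
const customersWithOrders = await prisma.customer.findMany({
  relationLoadStrategy: "join",
  include: {
    orders: true,
  },
});
```

Whether the JOIN strategy is actually faster depends on the dataset and database, so it's worth measuring both strategies for your own workload.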
Here's a basic checklist that helps you ensure that your Prisma ORM queries are at optimal performance:
If you follow these recommendations but are still seeing slow queries, please open an issue on GitHub with details about your query so that we can make sure it's as fast as it should be!
To support you in measuring performance and making your Prisma ORM queries faster, we have recently launched Prisma Optimize.
Optimize captures all queries that are sent to your DB via Prisma ORM and gives you insights into their performance with detailed tracing information. In the future, Optimize will be able to give you recommendations for speeding up slow queries, e.g. by pointing out when an excessive number of rows is returned or where to define an index in your schema.
You can find the results of our benchmark run on https://benchmarks.prisma.io. Check them out and let us know what you think on X and Discord. We're especially keen to hear how we can make this benchmark even more helpful to you, and we welcome suggestions for improvement!