import Screenshot from "@theme/Screenshot"
import { Clients } from "../../src/components/Clients"
Our first-party clients are the fastest way to insert data into QuestDB. Built on the InfluxDB Line Protocol (ILP), they excel at high-throughput, low-latency data streaming and are the recommended choice for production workloads.
To start quickly, select your language:
<Clients />

Our clients use the InfluxDB Line Protocol (ILP), an insert-only protocol that bypasses SQL INSERT statements and therefore achieves significantly higher throughput.
Data sent over the wire looks like this:

```text
trades,symbol=ETH-USD,side=sell price=2615.54,amount=0.00044 1646762637609765000\n
trades,symbol=BTC-USD,side=sell price=39269.98,amount=0.001 1646762637710419000\n
trades,symbol=ETH-USD,side=buy price=2615.4,amount=0.002 1646762637764098000\n
```
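To make the wire format concrete, here is a minimal, illustrative Python sketch that serializes one row into an ILP line. It is not the real client implementation: production clients also handle escaping, type suffixes, buffering, and transport, and the function name `to_ilp` is our own invention.

```python
def to_ilp(table, symbols, columns, ts_nanos):
    """Illustrative only: format one row as an ILP line of the shape
    table,tag1=v1,... field1=v1,... timestamp. Real clients also escape
    special characters and encode column types precisely."""
    tags = ",".join(f"{k}={v}" for k, v in symbols.items())
    fields = ",".join(f"{k}={v}" for k, v in columns.items())
    return f"{table},{tags} {fields} {ts_nanos}"

line = to_ilp(
    "trades",
    symbols={"symbol": "ETH-USD", "side": "sell"},
    columns={"price": 2615.54, "amount": 0.00044},
    ts_nanos=1646762637609765000,
)
print(line)
# trades,symbol=ETH-USD,side=sell price=2615.54,amount=0.00044 1646762637609765000
```

In practice you never build these strings by hand; the clients expose row-oriented APIs and do the serialization for you.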
Once inside QuestDB, the data is yours to manipulate and query via extended SQL. Note that table and column names must follow the QuestDB naming rules.
QuestDB is optimized for both throughput and latency. Send data when you have it - there's no need to artificially batch on the client side.
| Mode | Throughput (per connection) |
|---|---|
| Batched writes | ~400k rows/sec |
| Single-row writes | ~60-80k rows/sec |
Clients control batching via explicit flush() calls. Each flush ends a batch
and sends it to the server. If your data arrives one row at a time, send it one
row at a time - QuestDB handles this efficiently. If data arrives in bursts,
batch it naturally and flush when ready.
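The flush-ends-a-batch model can be sketched in a few lines of Python. This is an illustrative stand-in, not the real client: the class name `IlpBuffer` and the injected `send` callable are our own assumptions, whereas a real client sends the payload over HTTP or TCP.

```python
class IlpBuffer:
    """Sketch of client-side batching: rows accumulate in a buffer,
    and an explicit flush() ends the batch and hands it to send()."""

    def __init__(self, send):
        self._send = send   # callable that transmits one batch payload
        self._lines = []

    def row(self, line):
        self._lines.append(line)

    def flush(self):
        if self._lines:
            payload = "\n".join(self._lines) + "\n"
            self._send(payload)
            self._lines.clear()

sent = []
buf = IlpBuffer(sent.append)
buf.row("trades,symbol=ETH-USD,side=buy price=2615.4,amount=0.002 1646762637764098000")
buf.flush()   # one batch, one send; flushing an empty buffer sends nothing
```

Whether you flush after every row or after a burst of rows, the server accepts both patterns; the table above only quantifies the throughput trade-off.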
Server-side, WAL processing is asynchronous. Transactions are grouped into segments that roll based on size or row count, requiring no client-side tuning.
If you already have Kafka, Flink, or another streaming platform in your stack, QuestDB integrates with it; see our integration guides.
For bulk imports or one-time data loads, use the Import CSV tab in the Web Console:
<Screenshot alt="Screenshot of the UI for import" height={535} src="images/docs/console/import-ui.webp" width={800} />
For all CSV import methods, including using the APIs directly, see the CSV Import Guide.
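As one hedged example of the API route, QuestDB exposes a REST `/imp` endpoint that accepts CSV uploads as a multipart form field named `data`. The sketch below only builds the request URL (host, port, and the `trades` table name are example values); the actual upload is shown as a `curl` comment.

```python
from urllib.parse import urlencode

def imp_url(host, table, overwrite=False):
    """Build a QuestDB REST CSV-import URL for the /imp endpoint.
    Assumes the default HTTP port 9000; host and table are examples."""
    params = {"name": table}
    if overwrite:
        params["overwrite"] = "true"
    return f"http://{host}:9000/imp?{urlencode(params)}"

# The CSV file itself goes in a multipart field called "data", e.g.:
#   curl -F data=@trades.csv "http://localhost:9000/imp?name=trades"
print(imp_url("localhost", "trades"))
# http://localhost:9000/imp?name=trades
```

The Import CSV tab in the Web Console uses the same endpoint under the hood.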
No data yet? Just starting? No worries. We've got you covered.
One quick scaffolding option: use the `rnd_` functions and make your own data.

Depending on your infrastructure, it should now be apparent which ingestion method is worth pursuing.
Of course, ingestion (data-in) is only half the battle.
Your next best step? Learn how to query and explore data-out from the Query & SQL Overview.
It might also be a solid bet to review timestamp basics.