Flink SQL enables you to develop streaming and batch applications using standard SQL. Flink's SQL support is based on Apache Calcite, which implements the SQL standard. Queries have the same semantics and produce the same results regardless of whether the input is continuous (streaming) or bounded (batch).
Flink SQL integrates seamlessly with the [Table API]({{< ref "docs/dev/table/overview" >}}) and Flink's DataStream API. You can easily switch between all APIs and the libraries that build on them. For instance, you can detect patterns in a table using the [MATCH_RECOGNIZE clause]({{< ref "docs/sql/reference/queries/match_recognize" >}}) and later use the DataStream API to build alerting based on the matched patterns.
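As a sketch of the pattern-detection idea, the following query finds a price drop followed by a rise for each symbol. The `ticker` table and its columns (`symbol`, `price`, `rowtime`) are hypothetical; a real table would need a watermark on `rowtime` for event-time ordering:

```sql
-- Detect, per symbol, a run of rows below 10 followed by a row above 10.
SELECT *
FROM ticker
    MATCH_RECOGNIZE (
        PARTITION BY symbol
        ORDER BY rowtime
        MEASURES
            FIRST(down.rowtime) AS drop_start,
            LAST(up.rowtime)    AS recovery_time
        ONE ROW PER MATCH
        AFTER MATCH SKIP PAST LAST ROW
        PATTERN (down+ up)
        DEFINE
            down AS price < 10,
            up   AS price > 10
    );
```

Each emitted match could then feed a DataStream-based alerting pipeline.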
Flink SQL can be used through several interfaces depending on your use case:
| Interface | Description | Use Case |
|---|---|---|
| [SQL Client]({{< ref "docs/sql/interfaces/sql-client" >}}) | Interactive command-line interface | Ad-hoc queries, development, debugging |
| [SQL Gateway]({{< ref "docs/sql/interfaces/sql-gateway/overview" >}}) | REST and HiveServer2 endpoints | Remote SQL submission, integration with BI tools |
| [JDBC Driver]({{< ref "docs/sql/interfaces/jdbc-driver" >}}) | Standard JDBC connectivity | Application integration, BI tool connectivity |
| [Table API]({{< ref "docs/dev/table/overview" >}}) | Programmatic SQL execution | Embedded SQL in Java/Scala/Python applications |
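For a feel of the interactive workflow, the statements below show the kind of session you might run in the SQL Client (started with `./bin/sql-client.sh`). The `orders` table is a made-up example backed by the built-in `datagen` connector:

```sql
-- Choose streaming execution for this session.
SET 'execution.runtime-mode' = 'streaming';

-- A throwaway source table producing random rows.
CREATE TABLE orders (
    order_id BIGINT,
    amount   DECIMAL(10, 2)
) WITH (
    'connector' = 'datagen'
);

-- Results stream into the client's result view as rows arrive.
SELECT order_id, amount FROM orders;
```

The same statements can also be submitted remotely through the SQL Gateway or the JDBC Driver.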
Flink SQL is built on the concept of dynamic tables, which represent both bounded (batch) and unbounded (streaming) data. SQL queries on dynamic tables produce continuously updating results as new data arrives.
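A minimal illustration of an updating result, assuming a hypothetical `clicks` table with a `user_id` column:

```sql
-- On a bounded input this returns a final count per user; on a
-- streaming input the per-user count is continuously updated
-- (emitted as retractions/updates) as new click events arrive.
SELECT user_id, COUNT(*) AS click_cnt
FROM clicks
GROUP BY user_id;
```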
For a deeper understanding of how Flink SQL processes streaming data, see the concepts documentation on dynamic tables and continuous queries.
{{< top >}}