Turso is a SQLite-compatible database. This reference documents the SQL language as supported by Turso.
If you are familiar with SQLite, Turso supports most of the same SQL syntax. This reference covers only what Turso supports — features not listed here are not yet available. For Turso-specific extensions beyond SQLite (custom types, vector search, CDC, materialized views, encryption), see the Turso Extensions section.
Every statement page begins with a Syntax section showing the grammar using the following notation:
| Notation | Meaning |
|---|---|
| KEYWORD | A literal SQL keyword. Keywords are case-insensitive; uppercase is used by convention. |
| name | A placeholder for a user-supplied identifier (table name, column name, etc.). |
| expr | A placeholder for any SQL expression. |
| [X] | X is optional. |
| {A \| B} | Exactly one of A or B is required. |
| [A \| B] | Optionally, one of A or B may appear. |
| [, ...] | The preceding element may be repeated, separated by commas. |
For example, the syntax for INSERT reads:

INSERT [OR {ROLLBACK | ABORT | FAIL | IGNORE | REPLACE}]
INTO table-name [(column-name [, ...])]
VALUES (expr [, ...]) [, ...]
This means:
- INSERT is required.
- OR ROLLBACK, OR ABORT, etc. are optional; pick one if used.
- One VALUES row is required, and you may provide more separated by commas.

Identifiers (table names, column names) follow these rules:
"name"), square brackets ([name]), or backticks (name).'text'). Double quotes are for identifiers, not strings.Turso uses SQLite’s dynamic type system. Every value has one of five storage classes:
| Storage Class | Description |
|---|---|
| NULL | A null value. |
| INTEGER | A signed integer, stored in 1, 2, 3, 4, 6, or 8 bytes. |
| REAL | A floating-point number, stored as an 8-byte IEEE 754 float. |
| TEXT | A UTF-8 string. |
| BLOB | Raw binary data, stored exactly as provided. |
Column type names in CREATE TABLE determine the column’s type affinity, which influences how values are coerced on insertion. See Type Conversions for the full rules.
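Because Turso is SQLite-compatible, type affinity can be sketched with Python's sqlite3 module (this assumes Turso matches SQLite's coercion rules, which this reference implies):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (i INTEGER, r REAL, t TEXT, b BLOB)")
# The text '42' is coerced to an integer by the i column's INTEGER affinity,
# '3.5' becomes a real, the integer 42 is stored as text in the TEXT column,
# and the BLOB column applies no conversion at all.
con.execute("INSERT INTO t VALUES ('42', '3.5', 42, 42)")
row = con.execute("SELECT typeof(i), typeof(r), typeof(t), typeof(b) FROM t").fetchone()
print(row)  # ('integer', 'real', 'text', 'integer')
```

Note that the BLOB column leaves the integer 42 untouched: BLOB affinity never coerces.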
SELECT [DISTINCT | ALL] result-column [, ...]
[FROM table-or-subquery [, ...]]
[WHERE expr]
[GROUP BY expr [, ...] [HAVING expr]]
[ORDER BY ordering-term [, ...]]
[LIMIT expr [OFFSET expr]]
Where result-column is one of:
*
table-name.*
expr [[AS] column-alias]
And table-or-subquery is one of:
table-name [[AS] table-alias]
(select-statement) [[AS] table-alias]
table-or-subquery join-operator table-or-subquery join-constraint
This page covers the basic SELECT statement: the SELECT clause, FROM clause, WHERE clause, and DISTINCT/ALL keywords. For related topics, see JOINs, GROUP BY and HAVING, ORDER BY, LIMIT, OFFSET, Set Operations, Subqueries, and Common Table Expressions.
The SELECT statement queries the database and returns zero or more rows of data. Each row has a fixed number of columns determined by the result expression list. A SELECT statement does not modify the database.
Processing a SELECT statement conceptually follows four steps:

1. The FROM clause (including any joins) assembles the input rows.
2. The WHERE clause filters those rows.
3. GROUP BY and HAVING form groups and filter them.
4. The result expression list (with DISTINCT, if specified) produces the output columns.
The list of expressions between SELECT and FROM is the result expression list. Each expression becomes a column in the output. Expressions can be constants, column references, computed values, or function calls.
-- Selecting literal values (no FROM clause needed)
SELECT 1 + 2;
-- Selecting with string concatenation
SELECT 10 * 2 AS doubled, 'hello' || ' ' || 'world' AS combined;
Any result expression can be given a name using the AS keyword. This alias becomes the column header in the output and can be referenced in ORDER BY and GROUP BY clauses.
SELECT name AS user_name, email AS contact
FROM users
WHERE active = 1;
The AS keyword is optional – SELECT name user_name FROM users is also valid – but including it is recommended for clarity.
The special expression * expands to all columns from all tables in the FROM clause.
-- Return all columns from the users table
SELECT * FROM users;
To expand all columns from a specific table (useful with multiple tables), use table.* or alias.*:
SELECT u.* FROM users u WHERE u.active = 1;
The * and table.* forms can only be used in the result expression list of a SELECT that has a FROM clause.
The FROM clause specifies the input data for the query. If omitted, the input is implicitly a single row with zero columns, which is useful for evaluating expressions.
-- No FROM clause: evaluate an expression directly
SELECT typeof(42), typeof(3.14), typeof('text'), typeof(NULL);
The simplest FROM clause names a single table. The query operates on all rows of that table.
SELECT name, email FROM users;
A table can be given an alias with the AS keyword (or simply by placing the alias after the table name). The alias can then be used to qualify column names.
SELECT u.name, u.email
FROM users AS u
WHERE u.active = 1;
Aliases are required when the same table appears more than once in a query, and they are convenient for shortening long table names.
When multiple tables are listed in the FROM clause separated by commas, Turso computes the Cartesian product of all rows from each table. This means every combination of rows is produced. A WHERE clause is typically used to filter the result to only the meaningful combinations.
-- Comma-separated tables with a WHERE condition (implicit join)
SELECT u.name, o.amount
FROM users AS u, orders AS o
WHERE u.id = o.user_id;
A comma between tables is equivalent to INNER JOIN or JOIN with no ON clause. For explicit join syntax with ON or USING clauses, see JOINs.
A parenthesized SELECT statement can appear in the FROM clause. The subquery is treated as a virtual table containing the data it returns. A subquery in FROM should be given an alias.
SELECT *
FROM (
SELECT user_id, sum(amount) AS total
FROM orders
GROUP BY user_id
) AS user_totals
WHERE total > 500;
Each column of the subquery inherits the type affinity and collation of the corresponding expression in the subquery’s result list.
The WHERE clause filters the input rows by evaluating its expression as a boolean for each row. Only rows where the expression evaluates to true are included in the result. Rows for which the expression evaluates to false or NULL are excluded.
-- Simple equality condition
SELECT name, email FROM users WHERE active = 1;
-- Multiple conditions with AND
SELECT name, email FROM users WHERE active = 1 AND name <> 'Alice';
-- Using OR to match alternative conditions
SELECT name, price
FROM products
WHERE category = 'Electronics' OR price < 250;
Comparisons with NULL using = or <> always evaluate to NULL (not true or false), so rows with NULL values in the compared column are excluded by such conditions. Use IS NULL and IS NOT NULL to test for null values explicitly.
-- Find rows where email is missing
SELECT name FROM users WHERE email IS NULL;
-- Find rows where email is present
SELECT name, email FROM users WHERE email IS NOT NULL;
The LIKE operator performs pattern matching on text values; by default the comparison is case-insensitive for ASCII characters. The % wildcard matches any sequence of characters, and _ matches any single character.
-- Names starting with 'A'
SELECT name, email FROM users WHERE name LIKE 'A%';
-- Names containing 'a' anywhere (case-insensitive)
SELECT name FROM products WHERE name LIKE '%a%';
For more details, see Pattern Matching.
The BETWEEN operator tests whether a value falls within an inclusive range.
SELECT name, price FROM products WHERE price BETWEEN 200 AND 700;
For more details, see IN and BETWEEN.
The IN operator tests whether a value matches any value in a list or subquery result.
SELECT name, category
FROM products
WHERE category IN ('Electronics', 'Furniture') AND price > 500;
For more details, see IN and BETWEEN.
By default (or when ALL is specified explicitly), all result rows are returned, including duplicates. When DISTINCT is specified, duplicate rows are removed from the result set before it is returned.
-- Without DISTINCT: may contain duplicate categories
SELECT ALL category FROM products;
-- With DISTINCT: each category appears only once
SELECT DISTINCT category FROM products;
For the purposes of detecting duplicates, two NULL values are considered equal. An integer is equal to a floating-point number if they represent the same quantity. Text values are compared using the appropriate collation sequence.
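These duplicate-detection rules can be sketched with Python's sqlite3 module, assuming Turso matches SQLite here: two NULLs collapse to one row, and the integer 1 and the float 1.0 are treated as the same value.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Four input rows: NULL, NULL, 1, 1.0 — DISTINCT keeps only two of them.
rows = con.execute(
    "SELECT DISTINCT v FROM (SELECT NULL AS v UNION ALL SELECT NULL "
    "UNION ALL SELECT 1 UNION ALL SELECT 1.0)"
).fetchall()
print(len(rows))  # 2: one NULL row, and one row for the equal pair 1/1.0
```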
The examples below use the following tables:
CREATE TABLE users (
id INTEGER PRIMARY KEY,
name TEXT,
email TEXT,
active INTEGER
);
INSERT INTO users VALUES
(1, 'Alice', '[email protected]', 1),
(2, 'Bob', '[email protected]', 0),
(3, 'Charlie', '[email protected]', 1);
CREATE TABLE products (
id INTEGER PRIMARY KEY,
name TEXT,
category TEXT,
price REAL
);
INSERT INTO products VALUES
(1, 'Laptop', 'Electronics', 999.99),
(2, 'Phone', 'Electronics', 699.99),
(3, 'Desk', 'Furniture', 299.99),
(4, 'Chair', 'Furniture', 199.99),
(5, 'Tablet', 'Electronics', 499.99);
CREATE TABLE orders (
id INTEGER PRIMARY KEY,
user_id INTEGER,
product TEXT,
amount REAL
);
INSERT INTO orders VALUES
(1, 1, 'Laptop', 999.99),
(2, 1, 'Phone', 699.99),
(3, 2, 'Desk', 299.99),
(4, 3, 'Chair', 199.99),
(5, 1, 'Tablet', 499.99);
SELECT * FROM users;
-- id | name | email | active
-- 1 | Alice | [email protected] | 1
-- 2 | Bob | [email protected] | 0
-- 3 | Charlie | [email protected] | 1
SELECT name, email FROM users;
-- name | email
-- Alice | [email protected]
-- Bob | [email protected]
-- Charlie | [email protected]
SELECT name, email FROM users WHERE active = 1;
-- name | email
-- Alice | [email protected]
-- Charlie | [email protected]
SELECT
name,
price,
price * 0.9 AS discounted_price
FROM products
WHERE price > 300;
-- name | price | discounted_price
-- Laptop | 999.99 | 899.991
-- Phone | 699.99 | 629.991
-- Tablet | 499.99 | 449.991
SELECT
name,
price,
CASE
WHEN price > 500 THEN 'expensive'
WHEN price > 200 THEN 'moderate'
ELSE 'affordable'
END AS price_tier
FROM products;
-- name | price | price_tier
-- Laptop | 999.99 | expensive
-- Phone | 699.99 | expensive
-- Desk | 299.99 | moderate
-- Chair | 199.99 | affordable
-- Tablet | 499.99 | moderate
SELECT u.name, o.amount
FROM users AS u, orders AS o
WHERE u.id = o.user_id;
-- name | amount
-- Alice | 999.99
-- Alice | 699.99
-- Alice | 499.99
-- Bob | 299.99
-- Charlie | 199.99
SELECT *
FROM (
SELECT user_id, sum(amount) AS total
FROM orders
GROUP BY user_id
) AS user_totals
WHERE total > 500;
-- user_id | total
-- 1 | 2199.97
SELECT DISTINCT category FROM products;
-- category
-- Electronics
-- Furniture
Turso supports the standard SELECT statement, with the following exception:
| Feature | Status |
|---|---|
| schema.table.column (three-part names) | Not supported. Turso does not support attached databases or schema-qualified table names. Use table.column (two-part names) instead. |
table-or-subquery {[INNER] JOIN | LEFT [OUTER] JOIN | NATURAL [LEFT [OUTER]] JOIN} table-or-subquery [join-constraint]
Where join-constraint is one of:
ON expr
USING (column-name [, ...])
Tables may also be joined implicitly using a comma:
SELECT ... FROM table1, table2 WHERE expr
A JOIN combines rows from two or more tables based on a related column between them. The result of a join is a new set of rows, where each row contains columns from both tables.
Conceptually, a join starts with the cartesian product of the left and right datasets – every row from the left table paired with every row from the right table. A join constraint (ON or USING) then filters this cartesian product to only the rows where the constraint is satisfied. Different join types control what happens with rows that have no match.
Turso supports INNER JOIN, LEFT OUTER JOIN, and NATURAL JOIN. Multiple joins can be chained in a single query and are evaluated left to right.
An INNER JOIN returns only the rows where the join constraint is satisfied in both tables. Rows from either table that have no matching row in the other table are excluded from the result.
The keyword INNER is optional – JOIN by itself is equivalent to INNER JOIN.
-- These are equivalent
SELECT * FROM users INNER JOIN departments ON users.department_id = departments.id;
SELECT * FROM users JOIN departments ON users.department_id = departments.id;
A LEFT JOIN returns all rows from the left table, even if there is no matching row in the right table. When a left-table row has no match, the columns from the right table are filled with NULL.
The keyword OUTER is optional – LEFT JOIN and LEFT OUTER JOIN are equivalent.
-- These are equivalent
SELECT * FROM users LEFT JOIN departments ON users.department_id = departments.id;
SELECT * FROM users LEFT OUTER JOIN departments ON users.department_id = departments.id;
A NATURAL JOIN automatically joins two tables on all columns that share the same name in both tables. It is equivalent to a join with a USING clause that lists every common column name.
If the two tables share no column names, a NATURAL JOIN behaves like a cartesian product (every row paired with every row).
NATURAL can be combined with LEFT to form a NATURAL LEFT JOIN. A NATURAL JOIN cannot have an explicit ON or USING clause.
-- If both tables have a column named "id", this is equivalent to:
-- SELECT * FROM users JOIN profiles USING(id)
SELECT * FROM users NATURAL JOIN profiles;
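The degenerate case — no shared column names — can be sketched with Python's sqlite3 module (assuming Turso matches SQLite): the NATURAL JOIN silently becomes a cartesian product.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE colors (color TEXT);
CREATE TABLE sizes (size TEXT);
INSERT INTO colors VALUES ('red'), ('blue');
INSERT INTO sizes VALUES ('small'), ('large');
""")
# colors and sizes share no column names, so NATURAL JOIN pairs every row
# from one table with every row from the other.
rows = con.execute("SELECT * FROM colors NATURAL JOIN sizes").fetchall()
print(len(rows))  # 4 = 2 x 2
```

This is a common pitfall of NATURAL JOIN: renaming a column can silently change a join into a cross product.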
Listing tables separated by commas in the FROM clause produces the cartesian product of those tables. A WHERE clause is typically used to filter the result, which is functionally equivalent to an INNER JOIN with an ON clause.
-- These produce the same result
SELECT * FROM users, departments WHERE users.department_id = departments.id;
SELECT * FROM users JOIN departments ON users.department_id = departments.id;
The ON clause specifies a boolean expression that is evaluated for each row of the cartesian product. Only rows where the expression evaluates to true are included in the result. The expression can reference columns from both tables.
SELECT users.name, departments.name
FROM users JOIN departments ON users.department_id = departments.id;
The ON clause can contain compound conditions using AND, OR, and other operators:
SELECT u.name, o.amount
FROM users u LEFT JOIN orders o ON u.id = o.user_id AND o.amount > 75.00;
The USING clause specifies one or more column names that must exist in both tables. For each named column, the join matches rows where the values are equal. This is equivalent to writing ON left.col = right.col for each column, but more concise.
An important difference from ON: the USING clause eliminates the duplicate column from the result. Only one copy of each named column appears in the output.
SELECT users.name, departments.dept_name
FROM users JOIN departments USING(department_id);
Multiple columns can be specified in a single USING clause:
SELECT * FROM t1 JOIN t2 USING(a, b);
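The duplicate-column difference between ON and USING is easy to observe from the result's column names. A sketch with Python's sqlite3 module, assuming Turso matches SQLite (the tables a and b here are hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE a (k INTEGER, x TEXT);
CREATE TABLE b (k INTEGER, y TEXT);
INSERT INTO a VALUES (1, 'left');
INSERT INTO b VALUES (1, 'right');
""")
# ON keeps both copies of the join column; USING emits it once.
on_cols = [d[0] for d in con.execute("SELECT * FROM a JOIN b ON a.k = b.k").description]
using_cols = [d[0] for d in con.execute("SELECT * FROM a JOIN b USING (k)").description]
print(on_cols)    # ['k', 'x', 'k', 'y']
print(using_cols) # ['k', 'x', 'y']
```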
For INNER JOINs, placing a condition in the ON clause or the WHERE clause produces the same result. For LEFT JOINs, however, the distinction matters:
Consider this example where Charlie has no department (department_id is NULL):
-- ON clause: Charlie appears with NULL department
SELECT users.name, departments.name
FROM users LEFT JOIN departments ON users.department_id = departments.id;
┌─────────┬─────────────┐
│ name │ name │
├─────────┼─────────────┤
│ Alice │ Engineering │
│ Bob │ Marketing │
│ Charlie │ │
└─────────┴─────────────┘
-- WHERE clause: Charlie is excluded because departments.name IS NOT NULL fails
SELECT users.name, departments.name
FROM users LEFT JOIN departments ON users.department_id = departments.id
WHERE departments.name IS NOT NULL;
┌───────┬─────────────┐
│ name │ name │
├───────┼─────────────┤
│ Alice │ Engineering │
│ Bob │ Marketing │
└───────┴─────────────┘
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, department_id INTEGER);
CREATE TABLE departments (id INTEGER PRIMARY KEY, name TEXT);
INSERT INTO users VALUES (1, 'Alice', 1);
INSERT INTO users VALUES (2, 'Bob', 2);
INSERT INTO users VALUES (3, 'Charlie', NULL);
INSERT INTO departments VALUES (1, 'Engineering');
INSERT INTO departments VALUES (2, 'Marketing');
INSERT INTO departments VALUES (3, 'Sales');
-- Return only users that have a matching department
SELECT users.name, departments.name
FROM users INNER JOIN departments ON users.department_id = departments.id;
┌───────┬─────────────┐
│ name │ name │
├───────┼─────────────┤
│ Alice │ Engineering │
│ Bob │ Marketing │
└───────┴─────────────┘
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, amount REAL);
INSERT INTO users VALUES (1, 'Alice');
INSERT INTO users VALUES (2, 'Bob');
INSERT INTO users VALUES (3, 'Charlie');
INSERT INTO orders VALUES (1, 1, 99.99);
INSERT INTO orders VALUES (2, 1, 49.50);
INSERT INTO orders VALUES (3, 2, 150.00);
-- Find users who have never placed an order
SELECT u.name
FROM users u LEFT JOIN orders o ON u.id = o.user_id
WHERE o.id IS NULL;
┌─────────┐
│ name │
├─────────┤
│ Charlie │
└─────────┘
A table can be joined to itself using aliases. This is useful for hierarchical data such as employee-manager relationships.
CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, manager_id INTEGER);
INSERT INTO employees VALUES (1, 'Alice', NULL);
INSERT INTO employees VALUES (2, 'Bob', 1);
INSERT INTO employees VALUES (3, 'Charlie', 1);
INSERT INTO employees VALUES (4, 'Diana', 2);
-- Show each employee alongside their manager's name
SELECT e.name AS employee, m.name AS manager
FROM employees e LEFT JOIN employees m ON e.manager_id = m.id;
┌──────────┬─────────┐
│ employee │ manager │
├──────────┼─────────┤
│ Alice │ │
│ Bob │ Alice │
│ Charlie │ Alice │
│ Diana │ Bob │
└──────────┴─────────┘
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, department_id INTEGER);
CREATE TABLE departments (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, amount REAL);
INSERT INTO users VALUES (1, 'Alice', 1);
INSERT INTO users VALUES (2, 'Bob', 2);
INSERT INTO users VALUES (3, 'Charlie', NULL);
INSERT INTO departments VALUES (1, 'Engineering');
INSERT INTO departments VALUES (2, 'Marketing');
INSERT INTO departments VALUES (3, 'Sales');
INSERT INTO orders VALUES (1, 1, 99.99);
INSERT INTO orders VALUES (2, 1, 49.50);
INSERT INTO orders VALUES (3, 2, 150.00);
-- Show each user's department and total spending
SELECT u.name, d.name AS department, SUM(o.amount) AS total_spent
FROM users u
JOIN departments d ON u.department_id = d.id
LEFT JOIN orders o ON u.id = o.user_id
GROUP BY u.id;
┌───────┬─────────────┬─────────────┐
│ name │ department │ total_spent │
├───────┼─────────────┼─────────────┤
│ Alice │ Engineering │ 149.49 │
│ Bob │ Marketing │ 150.0 │
└───────┴─────────────┴─────────────┘
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE profiles (id INTEGER PRIMARY KEY, bio TEXT);
INSERT INTO users VALUES (1, 'Alice');
INSERT INTO users VALUES (2, 'Bob');
INSERT INTO users VALUES (3, 'Charlie');
INSERT INTO profiles VALUES (1, 'Engineer');
INSERT INTO profiles VALUES (2, 'Designer');
-- NATURAL LEFT JOIN matches on the shared "id" column
-- and preserves users with no profile
SELECT name, bio FROM users NATURAL LEFT JOIN profiles;
┌─────────┬──────────┐
│ name │ bio │
├─────────┼──────────┤
│ Alice │ Engineer │
│ Bob │ Designer │
│ Charlie │ │
└─────────┴──────────┘
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, amount REAL);
INSERT INTO users VALUES (1, 'Alice');
INSERT INTO users VALUES (2, 'Bob');
INSERT INTO users VALUES (3, 'Charlie');
INSERT INTO orders VALUES (1, 1, 99.99);
INSERT INTO orders VALUES (2, 1, 49.50);
INSERT INTO orders VALUES (3, 2, 150.00);
-- Join against an aggregated subquery
SELECT u.name, totals.total_amount
FROM users u
JOIN (SELECT user_id, SUM(amount) AS total_amount FROM orders GROUP BY user_id) AS totals
ON u.id = totals.user_id;
┌───────┬──────────────┐
│ name │ total_amount │
├───────┼──────────────┤
│ Alice │ 149.49 │
│ Bob │ 150.0 │
└───────┴──────────────┘
CREATE TABLE colors (name TEXT);
CREATE TABLE sizes (name TEXT);
INSERT INTO colors VALUES ('red');
INSERT INTO colors VALUES ('blue');
INSERT INTO sizes VALUES ('small');
INSERT INTO sizes VALUES ('large');
-- Every combination of color and size
SELECT colors.name, sizes.name FROM colors, sizes;
┌──────┬───────┐
│ name │ name │
├──────┼───────┤
│ red │ small │
│ red │ large │
│ blue │ small │
│ blue │ large │
└──────┴───────┘
Turso does not support the following join features available in SQLite:
| Feature | Status |
|---|---|
| RIGHT JOIN / RIGHT OUTER JOIN | Not supported |
| FULL JOIN / FULL OUTER JOIN | Not supported |
| CROSS JOIN | Not supported. In SQLite, CROSS JOIN is semantically identical to INNER JOIN but hints the optimizer not to reorder the join. Turso does not parse this syntax. Use INNER JOIN or comma syntax instead. |
SELECT result-column [, ...]
FROM table-or-subquery
[WHERE where-expr]
[GROUP BY expr [, ...]]
[HAVING having-expr]
The GROUP BY clause organizes rows into groups based on one or more expressions. When GROUP BY is present, each unique combination of values in the grouping expressions forms a single group, and the query returns one row per group. Aggregate functions in the result set (such as COUNT(), SUM(), AVG(), MIN(), MAX(), and GROUP_CONCAT()) are evaluated once per group rather than once for the entire result set.
The HAVING clause filters groups after they have been formed. It works like WHERE, but operates on the grouped results rather than on individual rows. HAVING is evaluated once per group and may reference aggregate functions.
Together, GROUP BY and HAVING enable summary queries – computing totals, averages, counts, and other statistics across categories of data.
The GROUP BY clause accepts one or more expressions, separated by commas. Each expression is evaluated for every row in the input, and rows that produce equal values for all grouping expressions are combined into a single group.
Key behaviors:
- Grouping expressions may be column references, computed expressions, CASE expressions, or function calls.
- An integer refers to a result column by position: GROUP BY 1 refers to the first column in the result set.
- Aggregate functions may not appear in the GROUP BY clause: GROUP BY SUM(x) is an error.
- Text values are grouped using the applicable collation sequence; the default is BINARY.

When no GROUP BY clause is present but the result set contains aggregate functions, the entire input is treated as a single group and the query returns exactly one row.
The HAVING clause contains a boolean expression that is evaluated once per group. Groups for which the expression evaluates to false (or NULL) are excluded from the result set.
- HAVING may reference aggregate functions. This is the primary distinction from WHERE, which cannot.
- HAVING may reference values that are not in the result set.
- If HAVING contains a non-aggregate expression, it is evaluated against an arbitrarily selected row from the group.
- HAVING can be used without GROUP BY. In that case the entire result set is treated as one group, and HAVING determines whether that single group is returned or discarded.

When WHERE, GROUP BY, and HAVING all appear in the same query, they are processed in this order:
| Step | Clause | Purpose |
|---|---|---|
| 1 | WHERE | Filters individual rows before grouping |
| 2 | GROUP BY | Organizes remaining rows into groups |
| 3 | Aggregate functions | Computed once per group |
| 4 | HAVING | Filters groups after aggregation |
This means WHERE reduces the input before any grouping occurs, while HAVING operates on the already-grouped results. Use WHERE to exclude rows you do not want aggregated. Use HAVING to exclude groups based on aggregate values.
A “bare” column is a non-aggregate column that does not appear in the GROUP BY clause. For example:
SELECT customer, product, SUM(quantity) FROM orders GROUP BY customer;
Here customer is in GROUP BY, SUM(quantity) is an aggregate, but product is a bare column. Since each group may contain multiple distinct values for product, the value of product in the result is selected from an arbitrary row within the group.
Special behavior with MIN() and MAX(): When there is exactly one MIN() or MAX() aggregate in the query, bare columns take their values from the row that contains the minimum or maximum value. For example:
SELECT customer, product, MAX(quantity) AS max_quantity
FROM orders
GROUP BY customer;
The product value in each row comes from the input row that has the largest quantity for that customer.
This behavior is an extension beyond the SQL standard. Most other database engines require every non-aggregate column to appear in the GROUP BY clause.
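Because Turso is SQLite-compatible, this behavior can be sketched with Python's sqlite3 module (assuming Turso carries over SQLite's guarantee for a single MIN() or MAX()):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (customer TEXT, product TEXT, quantity INTEGER);
INSERT INTO orders VALUES ('Alice', 'Widget', 5), ('Alice', 'Gizmo', 9);
""")
# product is a bare column; with a single MAX() it is taken from the
# row that holds the maximum quantity for the group.
row = con.execute(
    "SELECT customer, product, MAX(quantity) FROM orders GROUP BY customer"
).fetchone()
print(row)  # ('Alice', 'Gizmo', 9)
```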
CREATE TABLE orders (
id INTEGER PRIMARY KEY,
customer TEXT,
product TEXT,
quantity INTEGER,
price REAL,
region TEXT
);
INSERT INTO orders VALUES (1, 'Alice', 'Widget', 5, 9.99, 'East');
INSERT INTO orders VALUES (2, 'Bob', 'Gadget', 2, 24.99, 'West');
INSERT INTO orders VALUES (3, 'Alice', 'Gadget', 1, 24.99, 'East');
INSERT INTO orders VALUES (4, 'Carol', 'Widget', 10, 9.99, 'East');
INSERT INTO orders VALUES (5, 'Bob', 'Widget', 3, 9.99, 'West');
INSERT INTO orders VALUES (6, 'Alice', 'Gizmo', 2, 49.99, 'East');
INSERT INTO orders VALUES (7, 'Carol', 'Gadget', 4, 24.99, 'West');
INSERT INTO orders VALUES (8, 'Dave', 'Widget', 7, 9.99, 'South');
-- Count orders per customer
SELECT customer, COUNT(*) AS order_count
FROM orders
GROUP BY customer;
-- Alice|3
-- Bob|2
-- Carol|2
-- Dave|1
-- Total quantity and revenue per product
SELECT product,
SUM(quantity) AS total_quantity,
SUM(quantity * price) AS total_revenue
FROM orders
GROUP BY product;
-- Gadget|7|174.93
-- Gizmo|2|99.98
-- Widget|25|249.75
-- Customers who spent more than $100
SELECT customer, SUM(quantity * price) AS total_spent
FROM orders
GROUP BY customer
HAVING SUM(quantity * price) > 100;
-- Alice|174.92
-- Carol|199.86
-- Quantity sold per region and product
SELECT region, product, SUM(quantity) AS total_quantity
FROM orders
GROUP BY region, product;
-- East|Gadget|1
-- East|Gizmo|2
-- East|Widget|15
-- South|Widget|7
-- West|Gadget|6
-- West|Widget|3
-- Classify products into price tiers and count orders per tier
SELECT CASE WHEN price < 20 THEN 'Budget' ELSE 'Premium' END AS tier,
COUNT(*) AS order_count
FROM orders
GROUP BY CASE WHEN price < 20 THEN 'Budget' ELSE 'Premium' END;
-- Budget|4
-- Premium|4
-- Products in the East region with total quantity above 3
SELECT product, SUM(quantity) AS total_quantity
FROM orders
WHERE region = 'East'
GROUP BY product
HAVING SUM(quantity) > 3;
-- Widget|15
-- Summary statistics per product
SELECT product,
AVG(quantity) AS avg_quantity,
MIN(quantity) AS min_quantity,
MAX(quantity) AS max_quantity
FROM orders
GROUP BY product;
-- Gadget|2.33333333333333|1|4
-- Gizmo|2.0|2|2
-- Widget|6.25|3|10
-- Count unique customers and products per region
SELECT region,
COUNT(DISTINCT customer) AS unique_customers,
COUNT(DISTINCT product) AS unique_products
FROM orders
GROUP BY region;
-- East|2|3
-- South|1|1
-- West|2|2
-- List distinct products purchased by each customer
SELECT customer, GROUP_CONCAT(DISTINCT product) AS products
FROM orders
GROUP BY customer;
-- Alice|Widget,Gadget,Gizmo
-- Bob|Gadget,Widget
-- Carol|Widget,Gadget
-- Dave|Widget
When aggregate functions appear without a GROUP BY clause, the entire table is treated as one group:
-- Overall totals across all orders
SELECT COUNT(*) AS total_orders, SUM(quantity) AS total_items
FROM orders;
-- 8|34
-- Return total revenue only if it exceeds $500
SELECT SUM(quantity * price) AS revenue
FROM orders
HAVING SUM(quantity * price) > 500;
-- 524.66
CREATE TABLE survey (
id INTEGER PRIMARY KEY,
respondent TEXT,
rating INTEGER
);
INSERT INTO survey VALUES (1, 'Alice', 5);
INSERT INTO survey VALUES (2, NULL, 3);
INSERT INTO survey VALUES (3, 'Bob', 4);
INSERT INTO survey VALUES (4, NULL, 2);
INSERT INTO survey VALUES (5, 'Alice', 3);
-- NULL respondents are grouped together
SELECT respondent, COUNT(*) AS responses, AVG(rating) AS avg_rating
FROM survey
GROUP BY respondent;
-- (NULL)|2|2.5
-- Alice|2|4.0
-- Bob|1|4.0
-- Group by the first column in the result set
SELECT customer, COUNT(*) AS order_count
FROM orders
GROUP BY 1;
-- Alice|3
-- Bob|2
-- Carol|2
-- Dave|1
SELECT result-column [, ...]
FROM table-or-subquery
[WHERE expr]
[GROUP BY expr [, ...]]
[ORDER BY ordering-term [, ...]]
[LIMIT expr [{OFFSET expr | , expr}]]
Where each ordering-term is:
expr [{ASC | DESC}] [COLLATE collation-name]
The ORDER BY clause determines the order in which rows are returned by a SELECT statement. Without an ORDER BY clause, the order of rows in the result set is undefined – the database may return them in any order it chooses, and that order may differ between executions.
The LIMIT clause places an upper bound on the number of rows returned. The optional OFFSET clause (or the comma syntax) skips a number of rows before returning results. Together, ORDER BY, LIMIT, and OFFSET are the building blocks for sorted output, top-N queries, and pagination.
These clauses appear at the end of a SELECT statement, after any WHERE, GROUP BY, and HAVING clauses. In a compound SELECT (using UNION, INTERSECT, or EXCEPT), only the final SELECT may include ORDER BY and LIMIT, and they apply to the entire compound result.
The ORDER BY clause accepts one or more ordering terms, separated by commas. Each term is an expression that defines a sort key. Rows are sorted by the first term; ties are broken by the second term, and so on.
Each ordering term is evaluated against every row. The resulting values are compared to determine output order. By default, rows are sorted in ascending order (ASC), where smaller values come first.
Sort direction:
| Keyword | Behavior |
|---|---|
| ASC | Ascending order (default). Smaller values first. |
| DESC | Descending order. Larger values first. |
NULL handling:
Turso considers NULL values to be smaller than any other value for sorting purposes. This means:
- In ascending order (ASC), NULLs appear at the beginning of the result set.
- In descending order (DESC), NULLs appear at the end of the result set.

Expression resolution:
Each ORDER BY expression is resolved in the following order of precedence:
1. If the expression is an integer literal, it refers to the result column at that position: ORDER BY 2 sorts by the second result column.
2. If the expression matches a column alias defined in the result set, it refers to that result column.
3. Otherwise, the expression is evaluated against each input row. It may be any expression, including function calls and CASE expressions.

Collation:
Text values are compared using a collation sequence. The collation used for each ordering term is determined by this precedence:
- If the ordering term has an explicit COLLATE collation-name, that collation is used.
- Otherwise, if the expression is a column, the column's declared collation is used.
- Otherwise, the default BINARY collation is used.

The built-in collation sequences are:
| Collation | Behavior |
|---|---|
| BINARY | Byte-by-byte comparison. Uppercase letters sort before lowercase. Default. |
| NOCASE | Case-insensitive comparison for ASCII characters. |
| RTRIM | Like BINARY, but trailing spaces are ignored. |
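A collation can also be declared on the column itself; it then applies to any ordering term that has no explicit COLLATE. A sketch with Python's sqlite3 module, assuming Turso matches SQLite's collation resolution:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE words (w TEXT COLLATE NOCASE);
INSERT INTO words VALUES ('apple'), ('Banana'), ('cherry');
""")
# No explicit COLLATE: the column's declared NOCASE collation applies.
nocase_order = [r[0] for r in con.execute("SELECT w FROM words ORDER BY w")]
# An explicit COLLATE on the ordering term overrides the column's collation.
binary_order = [r[0] for r in con.execute("SELECT w FROM words ORDER BY w COLLATE BINARY")]
print(nocase_order)  # ['apple', 'Banana', 'cherry']
print(binary_order)  # ['Banana', 'apple', 'cherry']
```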
The LIMIT clause restricts the number of rows returned. It takes a single expression that must evaluate to an integer (or a value that can be losslessly converted to an integer).
LIMIT is most useful when combined with ORDER BY. Without ORDER BY, the set of rows returned by LIMIT is arbitrary and unpredictable.
The OFFSET clause skips a specified number of rows from the beginning of the result set before returning rows. The expression must evaluate to an integer (or a value that can be losslessly converted to an integer).
When both LIMIT and OFFSET are specified, the first M rows are skipped (where M is the OFFSET value), and then the next N rows are returned (where N is the LIMIT value). If the result set contains fewer than M + N rows, all rows after the first M are returned.
Two syntax forms:
Turso supports two equivalent ways to write LIMIT with OFFSET:
| Form | Syntax | Meaning |
|---|---|---|
| Keyword form | LIMIT N OFFSET M | Skip M rows, return at most N rows |
| Comma form | LIMIT M, N | Skip M rows, return at most N rows |
Note that the comma form reverses the order of the values: the first value is the offset and the second is the limit. This is a common source of confusion. The keyword form (LIMIT N OFFSET M) is recommended for clarity.
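The operand swap is easy to verify side by side. A sketch with Python's sqlite3 module, assuming Turso matches SQLite (the nums table is hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE nums (n INTEGER);
INSERT INTO nums VALUES (1), (2), (3), (4), (5);
""")
# Keyword form: skip 1 row, return at most 2.
a = con.execute("SELECT n FROM nums ORDER BY n LIMIT 2 OFFSET 1").fetchall()
# Comma form: the offset comes FIRST, then the limit.
b = con.execute("SELECT n FROM nums ORDER BY n LIMIT 1, 2").fetchall()
print(a == b, a)  # True [(2,), (3,)]
```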
CREATE TABLE products (
id INTEGER PRIMARY KEY,
name TEXT,
price REAL,
category TEXT
);
INSERT INTO products VALUES (1, 'Keyboard', 49.99, 'Electronics');
INSERT INTO products VALUES (2, 'Notebook', 5.99, 'Stationery');
INSERT INTO products VALUES (3, 'Monitor', 299.99, 'Electronics');
INSERT INTO products VALUES (4, 'Pen', 1.99, 'Stationery');
INSERT INTO products VALUES (5, 'Mouse', 29.99, 'Electronics');
-- Sort by price, cheapest first (ascending is the default)
SELECT name, price FROM products ORDER BY price;
-- Pen|1.99
-- Notebook|5.99
-- Mouse|29.99
-- Keyboard|49.99
-- Monitor|299.99
-- Sort by price, most expensive first
SELECT name, price FROM products ORDER BY price DESC;
-- Monitor|299.99
-- Keyboard|49.99
-- Mouse|29.99
-- Notebook|5.99
-- Pen|1.99
-- Sort by category ascending, then by price descending within each category
SELECT name, category, price FROM products ORDER BY category ASC, price DESC;
-- Monitor|Electronics|299.99
-- Keyboard|Electronics|49.99
-- Mouse|Electronics|29.99
-- Notebook|Stationery|5.99
-- Pen|Stationery|1.99
-- Order by the second column in the result set (price)
SELECT name, price FROM products ORDER BY 2;
-- Pen|1.99
-- Notebook|5.99
-- Mouse|29.99
-- Keyboard|49.99
-- Monitor|299.99
-- Order by a computed column alias
SELECT name, price * 1.1 AS price_with_tax
FROM products ORDER BY price_with_tax LIMIT 3;
-- Pen|2.189
-- Notebook|6.589
-- Mouse|32.989
-- Sort by name length (descending), then alphabetically for ties
SELECT name, length(name) AS name_len
FROM products ORDER BY length(name) DESC, name ASC LIMIT 3;
-- Keyboard|8
-- Notebook|8
-- Monitor|7
CREATE TABLE words (word TEXT);
INSERT INTO words VALUES ('banana');
INSERT INTO words VALUES ('Apple');
INSERT INTO words VALUES ('cherry');
INSERT INTO words VALUES ('Blueberry');
-- Default BINARY collation: uppercase letters sort before lowercase
SELECT word FROM words ORDER BY word;
-- Apple
-- Blueberry
-- banana
-- cherry
-- NOCASE collation: case-insensitive sorting
SELECT word FROM words ORDER BY word COLLATE NOCASE;
-- Apple
-- banana
-- Blueberry
-- cherry
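The RTRIM collation can be checked the same way. Since Turso is SQLite-compatible, the sketch below uses Python's built-in sqlite3 module as a stand-in; the labels table is made up for the example.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE labels (label TEXT);
    INSERT INTO labels VALUES ('alpha   ');  -- trailing spaces
    INSERT INTO labels VALUES ('alpha');
""")
# Under the default BINARY collation, 'alpha' != 'alpha   '.
# Under RTRIM, trailing spaces are ignored, so both rows match.
binary_hits = con.execute(
    "SELECT COUNT(*) FROM labels WHERE label = 'alpha'").fetchone()[0]
rtrim_hits = con.execute(
    "SELECT COUNT(*) FROM labels WHERE label = 'alpha' COLLATE RTRIM"
).fetchone()[0]
# binary_hits == 1, rtrim_hits == 2
```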
CREATE TABLE scores (student TEXT, score INTEGER);
INSERT INTO scores VALUES ('Alice', 90);
INSERT INTO scores VALUES ('Bob', NULL);
INSERT INTO scores VALUES ('Carol', 85);
INSERT INTO scores VALUES ('Dave', NULL);
-- Ascending: NULLs appear first (NULLs are considered smaller than all other values)
SELECT student, score FROM scores ORDER BY score ASC;
-- Bob|
-- Dave|
-- Carol|85
-- Alice|90
-- Descending: NULLs appear last
SELECT student, score FROM scores ORDER BY score DESC;
-- Alice|90
-- Carol|85
-- Bob|
-- Dave|
-- Return the 3 cheapest products
SELECT name, price FROM products ORDER BY price LIMIT 3;
-- Pen|1.99
-- Notebook|5.99
-- Mouse|29.99
-- Skip the 2 cheapest products, return the next 2
SELECT name, price FROM products ORDER BY price LIMIT 2 OFFSET 2;
-- Mouse|29.99
-- Keyboard|49.99
-- Same result as above: LIMIT offset, count
SELECT name, price FROM products ORDER BY price LIMIT 2, 2;
-- Mouse|29.99
-- Keyboard|49.99
-- A negative LIMIT returns all rows (no upper bound)
SELECT name FROM products ORDER BY name LIMIT -1;
-- Keyboard
-- Monitor
-- Mouse
-- Notebook
-- Pen
CREATE TABLE employees (
id INTEGER PRIMARY KEY,
name TEXT,
department TEXT,
salary REAL
);
INSERT INTO employees VALUES (1, 'Alice', 'Engineering', 95000);
INSERT INTO employees VALUES (2, 'Bob', 'Marketing', 72000);
INSERT INTO employees VALUES (3, 'Carol', 'Engineering', 98000);
INSERT INTO employees VALUES (4, 'Dave', 'Sales', 68000);
INSERT INTO employees VALUES (5, 'Eve', 'Marketing', 75000);
INSERT INTO employees VALUES (6, 'Frank', 'Engineering', 102000);
INSERT INTO employees VALUES (7, 'Grace', 'Sales', 71000);
INSERT INTO employees VALUES (8, 'Heidi', 'Engineering', 91000);
-- Page 1 (first 3 employees by salary descending)
SELECT name, salary FROM employees ORDER BY salary DESC LIMIT 3 OFFSET 0;
-- Frank|102000.0
-- Carol|98000.0
-- Alice|95000.0
-- Page 2 (next 3)
SELECT name, salary FROM employees ORDER BY salary DESC LIMIT 3 OFFSET 3;
-- Heidi|91000.0
-- Eve|75000.0
-- Bob|72000.0
-- Page 3 (remaining)
SELECT name, salary FROM employees ORDER BY salary DESC LIMIT 3 OFFSET 6;
-- Grace|71000.0
-- Dave|68000.0
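The page queries above follow the pattern OFFSET = (page - 1) * page_size. A parameterized sketch, using Python's sqlite3 module as a SQLite-compatible stand-in (table and function names are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (id INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO items VALUES (?)", [(i,) for i in range(1, 9)])

def fetch_page(page, page_size):
    # Skip the rows belonging to earlier pages, then take one page.
    cur = con.execute(
        "SELECT id FROM items ORDER BY id LIMIT ? OFFSET ?",
        (page_size, (page - 1) * page_size))
    return [row[0] for row in cur]

# fetch_page(1, 3) -> [1, 2, 3]; the last page may be short:
# fetch_page(3, 3) -> [7, 8]
```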
NULLS FIRST and NULLS LAST are not yet fully supported. NULLS LAST returns a parse error. NULLS FIRST is accepted by the parser but does not change the sort behavior: NULLs always sort as the smallest values regardless. This means there is currently no way to override the default NULL placement in sort results.
select-statement {UNION | UNION ALL | INTERSECT | EXCEPT} select-statement
A compound SELECT chains two or more simple SELECT statements with a set operator. Multiple operators can be chained:
select-statement op select-statement [op select-statement ...]
An optional LIMIT clause may appear after the last SELECT:
select-statement op select-statement [LIMIT expr [OFFSET expr]]
Set operations combine the results of two or more SELECT statements into a single result set. Each constituent SELECT must return the same number of columns. Columns are matched by position (left to right), not by name. The column names in the final result are taken from the leftmost SELECT statement.
Turso supports four set operators:
| Operator | Duplicates | Description |
|---|---|---|
| UNION ALL | Keeps all | Returns every row from both queries, including duplicates. |
| UNION | Removed | Returns all rows from both queries, removing duplicate rows. |
| INTERSECT | Removed | Returns only rows that appear in both queries. |
| EXCEPT | Removed | Returns rows from the left query that do not appear in the right query. |
UNION, INTERSECT, and EXCEPT all remove duplicate rows from the final result. UNION ALL is the only operator that preserves duplicates.
UNION ALL returns all rows from the left SELECT followed by all rows from the right SELECT. No duplicate detection or removal is performed, making UNION ALL the most efficient set operator.
-- Combine all employees and contractors, keeping duplicates
SELECT name, department FROM employees
UNION ALL
SELECT name, department FROM contractors;
If a person appears in both tables with the same name and department, that combination appears twice in the result.
UNION works the same way as UNION ALL but removes duplicate rows from the combined result. Two rows are considered duplicates when every corresponding column value is equal.
-- Combine employees and contractors, removing duplicates
SELECT name, department FROM employees
UNION
SELECT name, department FROM contractors;
INTERSECT returns only rows that appear in both the left and right result sets. The output contains no duplicates.
-- Find people who appear in both tables with the same department
SELECT name, department FROM employees
INTERSECT
SELECT name, department FROM contractors;
EXCEPT returns rows from the left query that are not present in the right query. The output contains no duplicates. The order of the two queries matters: A EXCEPT B is different from B EXCEPT A.
-- Find employees who are not also contractors (by name and department)
SELECT name, department FROM employees
EXCEPT
SELECT name, department FROM contractors;
A LIMIT clause may appear after the final SELECT in a compound statement. The limit applies to the entire combined result, not just the last SELECT.
-- Get the first 4 rows from the combined result
SELECT name, department FROM employees
UNION ALL
SELECT name, department FROM contractors
LIMIT 4;
OFFSET is also supported:
-- Skip the first row and return the next 3
SELECT name, department FROM employees
UNION ALL
SELECT name, department FROM contractors
LIMIT 3 OFFSET 1;
All SELECT statements in a compound query must produce the same number of result columns. If they do not, Turso returns an error:
-- Error: different number of columns
SELECT name, department FROM employees
UNION ALL
SELECT name FROM contractors;
-- SELECTs to the left and right of UNION ALL do not have the same number of result columns
Columns are matched by position. The first column of the left SELECT pairs with the first column of the right SELECT, and so on. Column names and types do not need to match – only the count must be equal.
The result column names are always determined by the leftmost SELECT:
SELECT 1 AS first_col, 2 AS second_col
UNION ALL
SELECT 3, 4;
-- Column headers are "first_col" and "second_col"
For the purpose of identifying duplicate rows in UNION, INTERSECT, and EXCEPT:
Column values are compared as if by the = operator, except that two NULL values are considered equal to each other.
-- NULL is treated as equal to NULL for dedup purposes
SELECT NULL UNION SELECT NULL;
-- Returns one row (a single NULL)
Three or more SELECT statements can be connected with set operators. When chained, they group from left to right. That is, A op1 B op2 C is evaluated as (A op1 B) op2 C.
-- Three-way UNION ALL: employees, contractors, and interns
SELECT name, department FROM employees
UNION ALL
SELECT name, department FROM contractors
UNION ALL
SELECT name, department FROM interns;
Different operators can be mixed in the same compound statement:
-- First combine and deduplicate, then append without dedup
SELECT name, department FROM employees
UNION
SELECT name, department FROM contractors
UNION ALL
SELECT name, department FROM interns;
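Left-to-right grouping is observable when mixing UNION and UNION ALL. A minimal sketch, using Python's sqlite3 module as a SQLite-compatible stand-in:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# (SELECT 1 UNION SELECT 1) collapses to a single row, and the trailing
# UNION ALL then appends another 1, leaving two rows. Right-to-left
# grouping would instead deduplicate everything down to one row.
rows = con.execute("SELECT 1 UNION SELECT 1 UNION ALL SELECT 1").fetchall()
# rows == [(1,), (1,)]
```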
A compound SELECT can be used as a subquery in the FROM clause. This is useful for aggregating or filtering the combined result:
-- Count headcount per department across all worker types
SELECT department, COUNT(*) AS headcount
FROM (
SELECT name, department FROM employees
UNION ALL
SELECT name, department FROM contractors
)
GROUP BY department;
-- Filter the combined result
SELECT *
FROM (
SELECT name, department FROM employees
UNION ALL
SELECT name, department FROM contractors
)
WHERE department = 'Engineering';
The examples below use the following tables:
CREATE TABLE employees (
id INTEGER PRIMARY KEY,
name TEXT,
department TEXT,
salary REAL
);
INSERT INTO employees VALUES
(1, 'Alice', 'Engineering', 95000),
(2, 'Bob', 'Engineering', 88000),
(3, 'Carol', 'Marketing', 72000),
(4, 'Dave', 'Marketing', 68000),
(5, 'Eve', 'Sales', 75000);
CREATE TABLE contractors (
id INTEGER PRIMARY KEY,
name TEXT,
department TEXT,
rate REAL
);
INSERT INTO contractors VALUES
(1, 'Frank', 'Engineering', 110000),
(2, 'Grace', 'Marketing', 65000),
(3, 'Alice', 'Engineering', 95000);
SELECT name, department FROM employees
UNION ALL
SELECT name, department FROM contractors;
-- Alice|Engineering
-- Bob|Engineering
-- Carol|Marketing
-- Dave|Marketing
-- Eve|Sales
-- Frank|Engineering
-- Grace|Marketing
-- Alice|Engineering
Note that “Alice | Engineering” appears twice because UNION ALL does not remove duplicates.
SELECT name, department FROM employees
UNION
SELECT name, department FROM contractors;
-- Alice|Engineering
-- Bob|Engineering
-- Carol|Marketing
-- Dave|Marketing
-- Eve|Sales
-- Frank|Engineering
-- Grace|Marketing
The duplicate “Alice | Engineering” row has been removed. The result has 7 rows instead of 8.
SELECT name, department FROM employees
INTERSECT
SELECT name, department FROM contractors;
-- Alice|Engineering
Only “Alice | Engineering” appears in both tables.
SELECT name, department FROM employees
EXCEPT
SELECT name, department FROM contractors;
-- Bob|Engineering
-- Carol|Marketing
-- Dave|Marketing
-- Eve|Sales
Alice is excluded because she appears in both tables with the same name and department.
SELECT department FROM employees
INTERSECT
SELECT department FROM contractors;
-- Engineering
-- Marketing
SELECT department FROM employees
EXCEPT
SELECT department FROM contractors;
-- Sales
Sales exists only in the employees table.
SELECT department, COUNT(*) AS headcount
FROM (
SELECT name, department FROM employees
UNION ALL
SELECT name, department FROM contractors
)
GROUP BY department;
-- Engineering|4
-- Marketing|3
-- Sales|1
Note that UNION ALL is used here so that each person is counted, even if they appear in both tables.
SELECT name, department FROM employees
UNION ALL
SELECT name, department FROM contractors
LIMIT 4;
-- Alice|Engineering
-- Bob|Engineering
-- Carol|Marketing
-- Dave|Marketing
VALUES (1, 'Alice'), (2, 'Bob')
UNION ALL
VALUES (3, 'Carol');
-- 1|Alice
-- 2|Bob
-- 3|Carol
| Feature | Status |
|---|---|
| ORDER BY on compound SELECT | Not yet supported. Turso does not currently allow ORDER BY on the final result of a compound SELECT. As a workaround, wrap the compound SELECT in a subquery and apply ORDER BY to the outer query. |
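The subquery workaround described in the table can be sketched as follows, using Python's sqlite3 module as a SQLite-compatible stand-in:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# ORDER BY is applied to the outer query that wraps the compound
# SELECT, rather than to the compound SELECT itself.
rows = [r[0] for r in con.execute("""
    SELECT status FROM (
        SELECT 'pending' AS status
        UNION ALL
        SELECT 'active'
        UNION ALL
        SELECT 'done'
    )
    ORDER BY status
""")]
# rows == ['active', 'done', 'pending']
```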
A subquery is a SELECT statement enclosed in parentheses, used as an expression or table source within another SQL statement.
-- Scalar subquery (returns a single value)
(SELECT expr FROM table-name [WHERE ...])
-- EXISTS / NOT EXISTS
[NOT] EXISTS (SELECT ... FROM table-name [WHERE ...])
-- IN / NOT IN with subquery
expr [NOT] IN (SELECT expr FROM table-name [WHERE ...])
-- Derived table (subquery in FROM clause)
SELECT ... FROM (SELECT ... FROM table-name) AS alias
Subqueries allow you to nest one query inside another. They appear in several forms depending on context: as a single value in an expression (scalar subquery), as a set membership test (IN), as an existence check (EXISTS), or as a virtual table in the FROM clause (derived table).
A subquery can reference columns from its enclosing query. When it does, it is called a correlated subquery and is re-evaluated for each row of the outer query. When a subquery does not reference any outer columns, it is an uncorrelated subquery and may be evaluated once and its result reused.
A scalar subquery is a SELECT enclosed in parentheses that returns a single column. It can appear anywhere an expression is allowed: in the SELECT list, WHERE clause, HAVING clause, or even LIMIT and OFFSET.
The value of a scalar subquery is the value from the first row returned. If the subquery returns no rows, the result is NULL. If the subquery returns more than one column, Turso raises an error.
-- Scalar subquery in SELECT list
SELECT expr, (SELECT agg_func(...) FROM table-name) AS alias FROM table-name
-- Scalar subquery in WHERE clause
SELECT ... FROM table-name WHERE column > (SELECT agg_func(...) FROM table-name)
The EXISTS operator takes a subquery and evaluates to 1 (true) if the subquery returns at least one row, or 0 (false) if the subquery returns no rows. NOT EXISTS returns the opposite.
The number of columns returned by the subquery and their values do not matter. Only the presence or absence of rows is significant. By convention, SELECT 1 is often used inside EXISTS subqueries.
-- Returns rows from outer query where the subquery matches at least one row
SELECT ... FROM table-name t
WHERE EXISTS (SELECT 1 FROM other-table o WHERE o.ref_id = t.id)
-- Returns rows where no matching row exists
SELECT ... FROM table-name t
WHERE NOT EXISTS (SELECT 1 FROM other-table o WHERE o.ref_id = t.id)
EXISTS is commonly used with correlated subqueries to test whether related rows exist in another table.
The IN operator tests whether a value is a member of the set of values returned by a subquery. The subquery must return a single column. NOT IN tests the inverse.
expr [NOT] IN (SELECT column FROM table-name [WHERE ...])
The behavior of IN and NOT IN with NULL values follows specific rules:
| Left Value | Subquery Contains NULL | Value Found | IN Result | NOT IN Result |
|---|---|---|---|---|
| non-NULL | no | no | 0 (false) | 1 (true) |
| non-NULL | no | yes | 1 (true) | 0 (false) |
| non-NULL | yes | no | NULL | NULL |
| non-NULL | yes | yes | 1 (true) | 0 (false) |
| NULL | any | (empty set) | 0 (false) | 1 (true) |
| NULL | any | (non-empty) | NULL | NULL |
Key takeaways:
If the subquery returns no rows, IN always returns 0 and NOT IN always returns 1, regardless of NULL values.
If the value is found in the set, IN returns 1 even if the set also contains NULL.
If the value is not found and the set contains NULL, the result is NULL (unknown), not 0. This is because the value might match the unknown (NULL) element.
A correlated subquery references one or more columns from the outer query. Turso re-evaluates a correlated subquery for each row processed by the outer query.
-- The inner query references e.department_id from the outer query
SELECT ... FROM employees e
WHERE e.salary > (
SELECT AVG(e2.salary) FROM employees e2
WHERE e2.department_id = e.department_id
)
Correlated subqueries are supported in the SELECT list, WHERE clause, HAVING clause, and GROUP BY clause. Correlated subqueries can also be used with EXISTS and IN.
Note: Correlated subqueries in the ORDER BY clause are not currently supported.
A subquery in the FROM clause creates a derived table (also called an inline view). The subquery result is treated as a temporary table for the duration of the outer query.
A derived table must be given an alias using the AS keyword.
SELECT alias.column [, ...]
FROM (SELECT ... FROM table-name [WHERE ...] [GROUP BY ...]) AS alias
[JOIN other-table ON ...]
[WHERE ...]
Each column of the derived table inherits the type affinity and collation sequence of the corresponding expression in the subquery.
CREATE TABLE employees (
id INTEGER PRIMARY KEY,
name TEXT NOT NULL,
department_id INTEGER,
salary REAL
);
INSERT INTO employees VALUES
(1, 'Alice', 1, 95000), (2, 'Bob', 1, 88000),
(3, 'Carol', 2, 72000), (4, 'Dave', 3, 68000),
(5, 'Eve', 1, 105000), (6, 'Frank', 2, 71000);
-- Show each employee's salary alongside the company average
SELECT name, salary,
(SELECT AVG(salary) FROM employees) AS avg_salary
FROM employees
ORDER BY salary DESC;
-- Eve | 105000.0 | 83166.6666666667
-- Alice | 95000.0 | 83166.6666666667
-- Bob | 88000.0 | 83166.6666666667
-- Carol | 72000.0 | 83166.6666666667
-- Frank | 71000.0 | 83166.6666666667
-- Dave | 68000.0 | 83166.6666666667
-- Find employees earning above the company average
SELECT name, salary
FROM employees
WHERE salary > (SELECT AVG(salary) FROM employees);
-- Alice | 95000.0
-- Bob | 88000.0
-- Eve | 105000.0
When a scalar subquery matches no rows, it returns NULL:
SELECT (SELECT name FROM employees WHERE id = 999) AS result;
-- (NULL)
CREATE TABLE departments (
id INTEGER PRIMARY KEY,
name TEXT NOT NULL
);
INSERT INTO departments VALUES
(1, 'Engineering'), (2, 'Marketing'),
(3, 'Sales'), (4, 'HR');
-- Departments that have at least one employee
SELECT d.name
FROM departments d
WHERE EXISTS (
SELECT 1 FROM employees e WHERE e.department_id = d.id
);
-- Engineering
-- Marketing
-- Sales
-- Departments with no employees assigned
SELECT d.name
FROM departments d
WHERE NOT EXISTS (
SELECT 1 FROM employees e WHERE e.department_id = d.id
);
-- HR
-- Employees in the Engineering department
SELECT name
FROM employees
WHERE department_id IN (
SELECT id FROM departments WHERE name = 'Engineering'
);
-- Alice
-- Bob
-- Eve
-- Employees outside the Engineering department
SELECT name
FROM employees
WHERE department_id NOT IN (
SELECT id FROM departments WHERE name = 'Engineering'
);
-- Carol
-- Dave
-- Frank
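The NOT IN query above behaves as expected because departments.id never contains NULL. If the subquery can return NULL, the NULL rules from the table earlier apply and NOT IN silently matches nothing. A sketch using Python's sqlite3 module as a SQLite-compatible stand-in (the ids table is hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE ids (v INTEGER);
    INSERT INTO ids VALUES (1), (2), (NULL);
""")
# 3 NOT IN (1, 2, NULL) evaluates to NULL (unknown), not true,
# so the WHERE clause filters the row out.
rows = con.execute("SELECT 1 WHERE 3 NOT IN (SELECT v FROM ids)").fetchall()
# rows == []

# Excluding NULLs in the subquery restores the expected result.
rows2 = con.execute(
    "SELECT 1 WHERE 3 NOT IN (SELECT v FROM ids WHERE v IS NOT NULL)"
).fetchall()
# rows2 == [(1,)]
```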
-- Show each employee with their department name
SELECT e.name, e.salary,
(SELECT d.name FROM departments d WHERE d.id = e.department_id) AS dept_name
FROM employees e
ORDER BY e.salary DESC;
-- Eve | 105000.0 | Engineering
-- Alice | 95000.0 | Engineering
-- Bob | 88000.0 | Engineering
-- Carol | 72000.0 | Marketing
-- Frank | 71000.0 | Marketing
-- Dave | 68000.0 | Sales
-- Employees earning above their department's average salary
SELECT e.name, e.salary, e.department_id
FROM employees e
WHERE e.salary > (
SELECT AVG(e2.salary)
FROM employees e2
WHERE e2.department_id = e.department_id
);
-- Carol | 72000.0 | 2
-- Eve | 105000.0 | 1
-- Count employees per department
SELECT d.name,
(SELECT COUNT(*) FROM employees e WHERE e.department_id = d.id) AS emp_count
FROM departments d;
-- Engineering | 3
-- Marketing | 2
-- Sales | 1
-- HR | 0
-- Average salary per department using a derived table
SELECT dept_name, avg_salary
FROM (
SELECT d.name AS dept_name, AVG(e.salary) AS avg_salary
FROM departments d
JOIN employees e ON d.id = e.department_id
GROUP BY d.name
) AS dept_stats
ORDER BY avg_salary DESC;
-- Engineering | 96000.0
-- Marketing | 71500.0
-- Sales | 68000.0
-- Join a derived table of aggregate stats back to the departments table
SELECT c.name AS department, stats.emp_count, stats.avg_salary
FROM departments c
JOIN (
SELECT department_id,
COUNT(*) AS emp_count,
AVG(salary) AS avg_salary
FROM employees
GROUP BY department_id
) AS stats ON stats.department_id = c.id;
-- Engineering | 3 | 96000.0
-- Marketing | 2 | 71500.0
-- Sales | 1 | 68000.0
CREATE TABLE products (
id INTEGER PRIMARY KEY, name TEXT,
category_id INTEGER, price REAL
);
CREATE TABLE categories (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE reviews (id INTEGER PRIMARY KEY, product_id INTEGER, rating INTEGER);
INSERT INTO categories VALUES (1, 'Electronics'), (2, 'Clothing');
INSERT INTO products VALUES
(1, 'Laptop', 1, 999.99),
(2, 'T-Shirt', 2, 29.99),
(3, 'Headphones', 1, 79.99);
INSERT INTO reviews VALUES (1, 1, 5), (2, 1, 4), (3, 3, 3);
-- Electronics with at least one review rated 4 or higher
SELECT p.name, p.price
FROM products p
WHERE EXISTS (
SELECT 1 FROM reviews r
WHERE r.product_id = p.id AND r.rating >= 4
)
AND p.category_id IN (
SELECT id FROM categories WHERE name = 'Electronics'
);
-- Laptop | 999.99
Turso supports scalar subqueries used with comparison operators (e.g. WHERE x > (SELECT ...)), but does not support row value subqueries such as (x, y) = (SELECT a, b FROM ...). Only single-column subqueries are valid in comparison contexts.
Correlated subqueries in the ORDER BY clause are not yet supported. Uncorrelated subqueries in ORDER BY work as expected.
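A row value comparison can be rewritten with one scalar subquery per column. The sketch below shows this supported form against stock SQLite via Python's sqlite3 module (table and column names are made up for the example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE points (x INTEGER, y INTEGER);
    INSERT INTO points VALUES (1, 2), (3, 4);
    CREATE TABLE target (a INTEGER, b INTEGER);
    INSERT INTO target VALUES (3, 4);
""")
# Instead of WHERE (x, y) = (SELECT a, b FROM target),
# compare each column against its own single-column subquery.
rows = con.execute("""
    SELECT x, y FROM points
    WHERE x = (SELECT a FROM target)
      AND y = (SELECT b FROM target)
""").fetchall()
# rows == [(3, 4)]
```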
WITH cte-name [(column-name [, ...])] AS (select-stmt)
[, cte-name [(column-name [, ...])] AS (select-stmt) [, ...]]
{SELECT | INSERT | UPDATE | DELETE} ...
A Common Table Expression (CTE) is a named temporary result set defined within a WITH clause. CTEs exist only for the duration of the statement they are attached to. They behave like temporary views: you define them once and can reference them by name in the main statement that follows.
CTEs make complex queries easier to read by breaking them into named, reusable pieces. Instead of deeply nested subqueries, you can define each logical step as a separate CTE and compose them together.
A single WITH clause can define multiple CTEs, separated by commas. Later CTEs in the list can reference earlier ones, allowing you to build up results incrementally.
The WITH keyword introduces one or more CTE definitions. It must appear at the beginning of a top-level SELECT, INSERT, UPDATE, or DELETE statement.
WITH cte-name AS (select-stmt)
Each CTE definition consists of:
A name, which must be unique within the WITH clause; duplicate names produce an error.
A body: a SELECT statement (including compound SELECT with UNION, UNION ALL, INTERSECT, or EXCEPT) that defines the CTE’s contents.
You can optionally specify explicit column names for the CTE by listing them in parentheses after the CTE name:
WITH cte-name (column-name [, ...]) AS (select-stmt)
When column names are provided, they replace whatever column names the select-stmt would otherwise produce. This is useful when the CTE body contains expressions without natural names, or when you want to rename columns for clarity.
Multiple CTEs are separated by commas within a single WITH clause. Each subsequent CTE can reference any CTE defined before it:
WITH
first AS (select-stmt),
second AS (select-stmt), -- can reference 'first'
third AS (select-stmt) -- can reference 'first' and 'second'
SELECT ... FROM third;
The select-stmt inside a CTE definition can be a compound SELECT using set operators:
| Operator | Behavior |
|---|---|
| UNION | Combines results, removing duplicate rows |
| UNION ALL | Combines results, keeping all rows including duplicates |
| INTERSECT | Returns only rows present in both result sets |
| EXCEPT | Returns rows from the first result set that are not in the second |
The compound SELECT can also include LIMIT and OFFSET to restrict the CTE’s result set.
The most common use of CTEs. The main SELECT can reference any defined CTE by name in its FROM clause, in subqueries, or in WHERE clause conditions. A CTE can be referenced multiple times within the same statement.
WITH cte AS (select-stmt)
SELECT ... FROM cte;
A WITH clause can precede an INSERT statement. The CTEs are visible in the INSERT ... SELECT source query, in scalar subqueries within a VALUES clause, and in RETURNING clause subqueries.
WITH cte AS (select-stmt)
INSERT INTO table-name SELECT ... FROM cte;
A WITH clause can precede an UPDATE statement. The CTEs are visible in the WHERE clause, SET expressions, and RETURNING clause of the UPDATE.
WITH cte AS (select-stmt)
UPDATE table-name SET ... WHERE ... IN (SELECT ... FROM cte);
A WITH clause can precede a DELETE statement. The CTEs are visible in the WHERE clause and RETURNING clause of the DELETE.
WITH cte AS (select-stmt)
DELETE FROM table-name WHERE ... IN (SELECT ... FROM cte);
-- Define a simple CTE and select from it
WITH recent_cutoff AS (SELECT 30 AS days)
SELECT days FROM recent_cutoff;
-- 30
-- Rename the CTE columns to 'sum' and 'product'
WITH calculations(sum, product) AS (SELECT 3 + 4, 3 * 4)
SELECT sum, product FROM calculations;
-- 7|12
-- Chain CTEs: the second references the first
WITH
base AS (SELECT 10 AS val),
doubled AS (SELECT val * 2 AS val FROM base)
SELECT * FROM doubled;
-- 20
-- Build a pipeline of CTEs, each referencing the previous one
WITH
step1 AS (SELECT 1 AS x),
step2 AS (SELECT x FROM step1),
step3 AS (SELECT x FROM step2),
step4 AS (SELECT x FROM step3)
SELECT * FROM step4;
-- 1
-- Combine two result sets, removing duplicates
WITH statuses AS (
SELECT 'active' AS status
UNION
SELECT 'inactive'
UNION
SELECT 'pending'
)
SELECT * FROM statuses ORDER BY 1;
-- active
-- inactive
-- pending
-- UNION ALL preserves all rows, enabling accurate counts
WITH all_scores AS (
SELECT 1 AS score
UNION ALL
SELECT 2
UNION ALL
SELECT 3
)
SELECT COUNT(*) FROM all_scores;
-- 3
-- Restrict the CTE result set using LIMIT and OFFSET
WITH numbers AS (
SELECT 1 UNION SELECT 2 UNION SELECT 3
LIMIT 2 OFFSET 1
)
SELECT * FROM numbers ORDER BY 1;
-- 2
-- 3
-- Reference the same CTE twice to form a cross join
WITH codes AS (SELECT 1 AS x UNION SELECT 2)
SELECT * FROM codes AS a, codes AS b ORDER BY 1, 2;
-- 1|1
-- 1|2
-- 2|1
-- 2|2
-- A CTE can be referenced inside scalar subqueries in the SELECT list
WITH constants AS (SELECT 10 AS x, 20 AS y)
SELECT (SELECT x FROM constants), (SELECT y FROM constants);
-- 10|20
-- Use a CTE to supply rows for an INSERT ... SELECT
CREATE TABLE orders(amount);
WITH new_orders AS (
SELECT 100 UNION SELECT 250 UNION SELECT 75
)
INSERT INTO orders SELECT * FROM new_orders;
SELECT * FROM orders ORDER BY 1;
-- 75
-- 100
-- 250
-- CTEs work with the RETURNING clause
CREATE TABLE items(name TEXT);
WITH new_items AS (SELECT 'widget' AS name)
INSERT INTO items SELECT * FROM new_items RETURNING name;
-- widget
-- Reference a CTE inside a VALUES clause via scalar subquery
CREATE TABLE settings(key TEXT, value INT);
WITH defaults AS (SELECT 99 AS x)
INSERT INTO settings VALUES ('threshold', (SELECT x FROM defaults));
SELECT * FROM settings;
-- threshold|99
-- Use a CTE to identify rows to update
CREATE TABLE products(id INTEGER, price REAL);
INSERT INTO products VALUES (1, 10.0), (2, 20.0), (3, 30.0);
WITH expensive AS (SELECT id FROM products WHERE price > 15.0)
UPDATE products SET price = price * 0.9
WHERE id IN (SELECT id FROM expensive);
SELECT * FROM products ORDER BY id;
-- 1|10.0
-- 2|18.0
-- 3|27.0
-- Use a CTE to identify rows to delete
CREATE TABLE tasks(id INTEGER, status TEXT);
INSERT INTO tasks VALUES (1, 'done'), (2, 'pending'), (3, 'done');
WITH completed AS (SELECT id FROM tasks WHERE status = 'done')
DELETE FROM tasks WHERE id IN (SELECT id FROM completed);
SELECT * FROM tasks;
-- 2|pending
-- Combine multiple CTEs to build complex conditions
CREATE TABLE inventory(id INTEGER);
INSERT INTO inventory VALUES (1),(2),(3),(4),(5);
WITH
low AS (SELECT 1 UNION SELECT 2),
high AS (SELECT 4 UNION SELECT 5)
DELETE FROM inventory
WHERE id IN (SELECT * FROM low) OR id IN (SELECT * FROM high);
SELECT * FROM inventory;
-- 3
-- Chain CTEs: the first provides base data, the second transforms it
CREATE TABLE results(value INTEGER);
WITH
base AS (SELECT 1 AS x),
transformed AS (SELECT x + 10 FROM base)
INSERT INTO results SELECT * FROM transformed;
SELECT * FROM results;
-- 11
-- Reference a CTE inside the RETURNING clause
CREATE TABLE logs(entry TEXT);
INSERT INTO logs VALUES ('old_entry');
WITH marker AS (SELECT 99 AS code)
DELETE FROM logs WHERE entry = 'old_entry'
RETURNING (SELECT code FROM marker);
-- 99
Turso supports the WITH clause with the following limitations compared to SQLite:
| Feature | Status |
|---|---|
| Ordinary CTEs | Supported |
| Multiple CTEs in one WITH clause | Supported |
| CTE with SELECT, INSERT, UPDATE, DELETE | Supported |
| Explicit CTE column names | Supported |
| Compound SELECT in CTE body (UNION, UNION ALL, INTERSECT, EXCEPT) | Supported |
| RECURSIVE CTEs | Not supported |
| MATERIALIZED / NOT MATERIALIZED hints | Not supported |
| Non-SELECT statements in CTE body (INSERT, UPDATE, DELETE inside the CTE definition) | Not supported |
Only SELECT statements (including compound SELECT) are allowed in the CTE body. The main statement that follows the WITH clause can be SELECT, INSERT, UPDATE, or DELETE.
{INSERT | REPLACE} [OR {ROLLBACK | ABORT | FAIL | IGNORE | REPLACE}]
INTO table-name [AS alias] [(column-name [, ...])]
{VALUES (expr [, ...]) [, ...] | SELECT ... | DEFAULT VALUES}
[RETURNING expr [AS alias] [, ...]]
The INSERT statement creates new rows in an existing table. There are three forms: INSERT … VALUES inserts one or more explicitly specified rows, INSERT … SELECT inserts rows produced by a query, and INSERT … DEFAULT VALUES inserts a single row where every column takes its default value.
The keyword REPLACE is shorthand for INSERT OR REPLACE. When a constraint violation occurs, REPLACE deletes the conflicting row and inserts the new one.
An optional parenthesized list of column names may appear after the table name. When provided, the number of values in each VALUES row (or columns in the SELECT result) must match the number of listed columns. Columns not named in the list receive their default value, or NULL if no default is defined.
When the column list is omitted, the number of values must match the total number of columns in the table, and values are assigned left-to-right.
-- With column list: unlisted columns get defaults
INSERT INTO products (name, price) VALUES ('Widget', 9.99);
-- Without column list: must supply every column
INSERT INTO products VALUES (1, 'Widget', 9.99, 1);
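The default-filling behavior can be sketched with Python's sqlite3 module as a SQLite-compatible stand-in (the qty column with its DEFAULT is added here for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE products (
        id INTEGER PRIMARY KEY,
        name TEXT,
        price REAL,
        qty INTEGER DEFAULT 0
    )
""")
# id and qty are not in the column list: id (INTEGER PRIMARY KEY) is
# auto-assigned, and qty falls back to its DEFAULT of 0.
con.execute("INSERT INTO products (name, price) VALUES ('Widget', 9.99)")
row = con.execute("SELECT id, name, price, qty FROM products").fetchone()
# row == (1, 'Widget', 9.99, 0)
```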
The VALUES clause provides one or more rows of literal expressions. Multiple rows are separated by commas.
-- Single row
INSERT INTO products (name, price) VALUES ('Widget', 9.99);
-- Multiple rows
INSERT INTO products (name, price)
VALUES ('Widget', 9.99), ('Gadget', 24.95), ('Gizmo', 4.50);
A SELECT statement may be used instead of VALUES to insert rows produced by a query. Any valid SELECT is allowed, including compound SELECTs (UNION, UNION ALL, INTERSECT, EXCEPT) and SELECTs with ORDER BY or LIMIT.
If a column list is specified, the number of columns in the SELECT result must match the number of listed columns. Otherwise it must match the total number of columns in the target table.
CREATE TABLE products_archive (id INTEGER PRIMARY KEY, name TEXT, price REAL);
INSERT INTO products_archive SELECT id, name, price FROM products WHERE in_stock = 0;
The DEFAULT VALUES form inserts exactly one row. Every column receives its default value as specified in the CREATE TABLE statement, or NULL if no default was defined.
CREATE TABLE logs (id INTEGER PRIMARY KEY, created_at TEXT DEFAULT 'now', msg TEXT DEFAULT 'empty');
INSERT INTO logs DEFAULT VALUES;
-- Result: id=1, created_at='now', msg='empty'
By adding OR algorithm after the INSERT keyword, you can control what happens when the insertion would violate a constraint (UNIQUE, NOT NULL, CHECK, or PRIMARY KEY). The clause appears between INSERT and INTO.
| Algorithm | Behavior |
|---|---|
| ABORT | Abort the current statement and roll back any changes it made. This is the default behavior when no algorithm is specified. |
| ROLLBACK | Abort the current statement and roll back the entire enclosing transaction. |
| FAIL | Abort the current statement but keep changes made by earlier rows within the same statement. |
| IGNORE | Skip the row that caused the violation and continue processing remaining rows. |
| REPLACE | Delete the existing row that caused the conflict, then insert the new row. If the conflicting column has a NOT NULL constraint with a DEFAULT value, the default is used when NULL is supplied. If there is no default, the statement fails. |
Foreign key constraint violations are not affected by the conflict algorithm. They always behave like ABORT regardless of which algorithm is specified.
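A sketch of this behavior, assuming foreign key enforcement is enabled (table names are illustrative):
PRAGMA foreign_keys = ON;
CREATE TABLE parents (id INTEGER PRIMARY KEY);
CREATE TABLE children (id INTEGER PRIMARY KEY, parent_id INTEGER REFERENCES parents(id));
INSERT OR IGNORE INTO children VALUES (1, 999);
-- Error: FOREIGN KEY constraint failed (the row is not silently skipped)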
-- IGNORE: silently skip rows that violate constraints
CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT UNIQUE);
INSERT INTO users VALUES (1, '[email protected]');
INSERT OR IGNORE INTO users VALUES (2, '[email protected]');
-- The second row is silently skipped; the table still contains only row 1.
-- REPLACE: delete the conflicting row and insert the new one
INSERT OR REPLACE INTO users VALUES (2, '[email protected]');
-- Row 1 is deleted, row 2 with the same email is inserted.
The REPLACE keyword (without INSERT OR) is equivalent to INSERT OR REPLACE:
REPLACE INTO users VALUES (3, '[email protected]');
The RETURNING clause causes the INSERT statement to return values from each inserted row, much like a SELECT. It accepts a list of expressions that may reference columns of the inserted row, use functions, or contain arbitrary expressions. Use * to return all columns.
CREATE TABLE orders (id INTEGER PRIMARY KEY, product TEXT, qty INTEGER);
INSERT INTO orders (product, qty) VALUES ('Widget', 5) RETURNING *;
-- Returns: 1|Widget|5
INSERT INTO orders (product, qty)
VALUES ('Gadget', 3), ('Gizmo', 12)
RETURNING id, product;
-- Returns:
-- 2|Gadget
-- 3|Gizmo
RETURNING expressions can include functions and computed values:
CREATE TABLE line_items (id INTEGER PRIMARY KEY, product TEXT, qty INTEGER, unit_price REAL);
INSERT INTO line_items (product, qty, unit_price)
VALUES ('Widget', 5, 9.99)
RETURNING id, product, qty * unit_price AS total;
-- Returns: 1|Widget|49.95
For full details on the RETURNING clause, see RETURNING.
The optional AS alias after the table name provides an alternative name for the table. This alias is primarily useful with the UPSERT clause (ON CONFLICT … DO UPDATE), which is documented separately.
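As a sketch, an alias can shorten references to the existing row inside a DO UPDATE clause (the kv table here is illustrative):
CREATE TABLE kv (k TEXT PRIMARY KEY, v INTEGER);
INSERT INTO kv VALUES ('a', 1);
INSERT INTO kv AS t VALUES ('a', 5)
ON CONFLICT(k) DO UPDATE SET v = t.v + excluded.v;
-- Row 'a' now has v = 6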
-- Create a sample table
CREATE TABLE employees (
id INTEGER PRIMARY KEY,
name TEXT NOT NULL,
department TEXT,
salary REAL DEFAULT 50000.00
);
-- Insert a single row, specifying all columns
INSERT INTO employees VALUES (1, 'Alice Johnson', 'Engineering', 95000.00);
-- Insert with a column list; salary gets its default value
INSERT INTO employees (id, name, department)
VALUES (2, 'Bob Smith', 'Marketing');
-- Insert multiple rows at once
INSERT INTO employees (name, department, salary) VALUES
('Carol White', 'Engineering', 105000.00),
('David Brown', 'Sales', 72000.00),
('Eve Davis', 'Marketing', 68000.00);
-- Insert from a SELECT: copy all engineers into a new table
CREATE TABLE engineers (id INTEGER PRIMARY KEY, name TEXT, salary REAL);
INSERT INTO engineers
SELECT id, name, salary FROM employees WHERE department = 'Engineering';
-- Insert from a compound SELECT
CREATE TABLE all_names (name TEXT);
INSERT INTO all_names
SELECT name FROM employees
UNION ALL
SELECT name FROM engineers;
-- Insert a single row with all defaults
CREATE TABLE events (id INTEGER PRIMARY KEY, description TEXT DEFAULT 'unknown');
INSERT INTO events DEFAULT VALUES;
-- Result: id=1, description='unknown'
-- INSERT OR IGNORE: skip rows that would violate a UNIQUE constraint
CREATE TABLE tags (id INTEGER PRIMARY KEY, label TEXT UNIQUE);
INSERT INTO tags (label) VALUES ('urgent'), ('review'), ('done');
INSERT OR IGNORE INTO tags (label) VALUES ('urgent'), ('new'), ('done');
-- Only 'new' is inserted; 'urgent' and 'done' are skipped.
SELECT label FROM tags ORDER BY label;
-- done
-- new
-- review
-- urgent
-- INSERT OR REPLACE: replace the conflicting row
CREATE TABLE settings (key TEXT PRIMARY KEY, value TEXT);
INSERT INTO settings VALUES ('theme', 'light');
INSERT OR REPLACE INTO settings VALUES ('theme', 'dark');
SELECT * FROM settings;
-- theme|dark
-- REPLACE shorthand (equivalent to INSERT OR REPLACE)
REPLACE INTO settings VALUES ('theme', 'solarized');
-- INSERT with RETURNING to get generated IDs
CREATE TABLE tickets (id INTEGER PRIMARY KEY, title TEXT, priority INTEGER);
INSERT INTO tickets (title, priority)
VALUES ('Fix login bug', 1), ('Update docs', 3)
RETURNING id, title;
-- 1|Fix login bug
-- 2|Update docs
-- INSERT ... SELECT with RETURNING
CREATE TABLE source (name TEXT, amount INTEGER);
INSERT INTO source VALUES ('Alice', 10), ('Bob', 20);
CREATE TABLE totals (name TEXT, amount INTEGER);
INSERT INTO totals SELECT * FROM source RETURNING *;
-- Alice|10
-- Bob|20
-- INSERT OR IGNORE with RETURNING: only inserted rows are returned
CREATE TABLE codes (id INTEGER PRIMARY KEY, code TEXT UNIQUE);
INSERT INTO codes (code) VALUES ('A');
INSERT OR IGNORE INTO codes (code) VALUES ('A'), ('B') RETURNING id, code;
-- 2|B
-- (The duplicate 'A' is skipped and does not appear in the output.)
INSERT [OR {ROLLBACK | ABORT | FAIL | IGNORE | REPLACE}]
INTO table-name [(column-name [, ...])]
{VALUES (expr [, ...]) [, ...] | select-stmt}
[ON CONFLICT [(column-name [, ...]) [WHERE expr]]
{DO NOTHING | DO UPDATE SET assignment [, ...] [WHERE expr]}
] ...
Where each assignment is:
column-name = expr
or:
(column-name [, ...]) = (expr [, ...])
UPSERT is not a standalone statement. It is an optional clause that can be appended to an INSERT statement to control what happens when the insertion would violate a uniqueness constraint (a UNIQUE column, a UNIQUE index, or a PRIMARY KEY). Without an UPSERT clause, a uniqueness violation causes the statement to fail with an error.
An UPSERT clause begins with ON CONFLICT and specifies one of two actions: DO NOTHING, which silently skips the conflicting row, or DO UPDATE SET, which converts the insert into an update of the existing row. This gives you an atomic “insert or update” operation in a single statement, eliminating the need for separate existence checks.
Turso follows the PostgreSQL-style UPSERT syntax. UPSERT only applies to uniqueness constraints. It does not intercept NOT NULL, CHECK, or foreign key violations – those always produce an error regardless of any ON CONFLICT clause.
The conflict target appears between ON CONFLICT and DO. It specifies which uniqueness constraint should trigger the upsert behavior.
ON CONFLICT (column-name [, ...]) [WHERE expr] DO ...
The column list must exactly match the columns of a UNIQUE index or PRIMARY KEY. For a composite unique index, all columns must be listed, though the order does not matter.
-- Matches a UNIQUE index on (a, b), regardless of column order
ON CONFLICT (b, a) DO UPDATE SET ...
The conflict target is optional on the last (or only) ON CONFLICT clause. When omitted, the clause matches any uniqueness constraint violation that has not already been handled by a preceding ON CONFLICT clause.
-- Omitted target: matches any uniqueness violation
ON CONFLICT DO NOTHING
If a conflict target includes a WHERE clause, it becomes a partial conflict target. The WHERE expression restricts which rows of a partial unique index are considered when matching. This is relevant when the unique index itself was created with a WHERE clause.
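An illustrative sketch with a hypothetical partial unique index:
CREATE TABLE sessions (user_id INTEGER, active INTEGER);
CREATE UNIQUE INDEX one_active_session ON sessions(user_id) WHERE active = 1;
INSERT INTO sessions VALUES (7, 1);
-- The conflict target repeats the index's WHERE clause
INSERT INTO sessions VALUES (7, 1)
ON CONFLICT(user_id) WHERE active = 1 DO NOTHING;
-- The second insert is silently skipped.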
ON CONFLICT [conflict-target] DO NOTHING
When a uniqueness constraint is violated, the conflicting row is silently skipped. No insert or update occurs for that row. If the INSERT statement includes a RETURNING clause, skipped rows produce no output.
ON CONFLICT [conflict-target] DO UPDATE SET assignment [, ...] [WHERE expr]
When a uniqueness constraint is violated, Turso updates the existing row instead of inserting a new one. The SET clause works the same as in a regular UPDATE statement. You can set individual columns or use the tuple form to set multiple columns at once:
-- Individual assignments
DO UPDATE SET price = excluded.price, quantity = excluded.quantity
-- Tuple assignment
DO UPDATE SET (price, quantity) = (excluded.price, excluded.quantity)
Within the SET clause and its optional WHERE clause, column references that are unqualified or qualified with the target table name refer to the existing row (before the update). To reference the values that were proposed for insertion, use the special excluded table qualifier.
| Reference | Meaning |
|---|---|
| column-name | Value in the existing row |
| table-name.column-name | Value in the existing row (explicit) |
| excluded.column-name | Value from the attempted INSERT |
The DO UPDATE clause always uses ABORT conflict resolution internally. If the update itself causes a constraint violation (for example, setting a column to a value that duplicates another row’s unique key), the entire INSERT statement is rolled back.
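A sketch of this failure mode (the table is illustrative):
CREATE TABLE t (a INTEGER PRIMARY KEY, b INTEGER UNIQUE);
INSERT INTO t VALUES (1, 10), (2, 20);
-- The upsert targets row 1, but the update would duplicate row 2's unique b
INSERT INTO t VALUES (1, 99)
ON CONFLICT(a) DO UPDATE SET b = 20;
-- Error: UNIQUE constraint failed: t.b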
An optional WHERE clause after DO UPDATE SET controls whether the update actually takes effect. If the condition evaluates to false or NULL, the update is skipped for that row, effectively making the clause behave like DO NOTHING for that particular conflict.
ON CONFLICT (name) DO UPDATE SET
phonenumber = excluded.phonenumber,
valid_date = excluded.valid_date
WHERE excluded.valid_date > table-name.valid_date
This is useful for “only update if the new data is newer” patterns, or for conditional merges.
An INSERT statement may include more than one ON CONFLICT clause. Turso evaluates them in order. When a uniqueness violation occurs, the first clause whose conflict target matches the violated constraint is used. Only one clause executes per conflicting row.
Every ON CONFLICT clause except the last one must include a conflict target. The last clause may omit the conflict target to serve as a catch-all for any remaining uniqueness violations.
INSERT INTO table-name (...)
VALUES (...)
ON CONFLICT (x) DO UPDATE SET ... -- handles conflicts on x
ON CONFLICT (y) DO UPDATE SET ... -- handles conflicts on y
ON CONFLICT DO NOTHING; -- catch-all for any other uniqueness violation
When an INSERT provides multiple rows (either through multiple VALUES rows or a SELECT subquery), the upsert decision is made independently for each row. Some rows may be inserted normally, while others trigger DO UPDATE or DO NOTHING.
-- Create a table with a unique constraint
CREATE TABLE products (
id INTEGER PRIMARY KEY,
name TEXT UNIQUE,
price REAL,
quantity INTEGER
);
INSERT INTO products VALUES (1, 'Widget', 9.99, 100);
-- DO NOTHING: silently skip if the name already exists
INSERT INTO products VALUES (2, 'Widget', 12.99, 200)
ON CONFLICT DO NOTHING;
SELECT * FROM products;
-- 1|Widget|9.99|100
-- DO UPDATE: update price and quantity when name conflicts
INSERT INTO products VALUES (1, 'Widget', 12.99, 200)
ON CONFLICT(name) DO UPDATE SET
price = excluded.price,
quantity = excluded.quantity;
SELECT * FROM products;
-- 1|Widget|12.99|200
-- Mix existing row values with excluded values
CREATE TABLE counters (key TEXT UNIQUE, hits INTEGER, last_seen TEXT);
INSERT INTO counters VALUES ('page_home', 1, '2024-01-01');
-- Increment hits while updating last_seen from the new row
INSERT INTO counters VALUES ('page_home', 1, '2024-06-15')
ON CONFLICT(key) DO UPDATE SET
hits = hits + 1,
last_seen = excluded.last_seen;
SELECT * FROM counters;
-- page_home|2|2024-06-15
-- Conditional update: only apply if the new value is greater
CREATE TABLE high_scores (player TEXT UNIQUE, score INTEGER);
INSERT INTO high_scores VALUES ('Alice', 5);
INSERT INTO high_scores VALUES ('Alice', 3)
ON CONFLICT(player) DO UPDATE SET score = excluded.score
WHERE excluded.score > score;
SELECT * FROM high_scores;
-- Alice|5 (unchanged because 3 is not greater than 5)
INSERT INTO high_scores VALUES ('Alice', 10)
ON CONFLICT(player) DO UPDATE SET score = excluded.score
WHERE excluded.score > score;
SELECT * FROM high_scores;
-- Alice|10 (updated because 10 > 5)
-- Multi-row insert with upsert: each row handled independently
CREATE TABLE inventory (sku TEXT UNIQUE, name TEXT);
INSERT INTO inventory VALUES ('A001', 'Original');
INSERT INTO inventory VALUES ('A001', 'Updated'), ('B002', 'New Item')
ON CONFLICT(sku) DO UPDATE SET name = excluded.name;
SELECT * FROM inventory ORDER BY sku;
-- A001|Updated
-- B002|New Item
-- Multiple ON CONFLICT clauses with different targets
CREATE TABLE records (
id INTEGER PRIMARY KEY,
code TEXT UNIQUE,
email TEXT UNIQUE,
note TEXT DEFAULT NULL
);
INSERT INTO records VALUES (1, 'x', '[email protected]', 'original');
INSERT INTO records VALUES (2, 'y', '[email protected]', 'original');
INSERT INTO records VALUES (3, 'x', '[email protected]', 'new')
ON CONFLICT(code) DO UPDATE SET note = 'code-conflict'
ON CONFLICT(email) DO UPDATE SET note = 'email-conflict'
ON CONFLICT DO UPDATE SET note = 'other-conflict';
SELECT * FROM records ORDER BY id;
-- 1|x|[email protected]|code-conflict
-- 2|y|[email protected]|original
-- Upsert with RETURNING clause
CREATE TABLE settings (key TEXT PRIMARY KEY, value TEXT);
INSERT INTO settings VALUES ('theme', 'light');
INSERT INTO settings VALUES ('theme', 'dark')
ON CONFLICT DO UPDATE SET value = excluded.value
RETURNING key, value;
-- theme|dark
-- DO NOTHING with RETURNING produces no output for skipped rows
CREATE TABLE tags (name TEXT PRIMARY KEY);
INSERT INTO tags VALUES ('important');
INSERT INTO tags VALUES ('important')
ON CONFLICT DO NOTHING
RETURNING name;
-- (no output)
-- Composite unique index: target must list all columns
CREATE TABLE assignments (project TEXT, employee TEXT, role TEXT);
CREATE UNIQUE INDEX assignments_pk ON assignments(project, employee);
INSERT INTO assignments VALUES ('Atlas', 'Alice', 'Lead');
-- Column order in the target does not need to match the index
INSERT INTO assignments VALUES ('Atlas', 'Alice', 'Manager')
ON CONFLICT(employee, project) DO UPDATE SET role = excluded.role;
SELECT * FROM assignments;
-- Atlas|Alice|Manager
-- Using the table-qualified name in the conflict target
CREATE TABLE metrics (sensor_id INTEGER UNIQUE, reading REAL);
INSERT INTO metrics VALUES (1, 23.5);
INSERT INTO metrics VALUES (1, 25.0)
ON CONFLICT(metrics.sensor_id) DO UPDATE SET
reading = metrics.reading + excluded.reading;
SELECT * FROM metrics;
-- 1|48.5
UPSERT is fully supported in Turso. The syntax and behavior match SQLite, including support for multiple ON CONFLICT clauses, the excluded table, conditional WHERE clauses on DO UPDATE, composite conflict targets, and tuple-form SET assignments.
UPDATE [OR {ROLLBACK | ABORT | FAIL | IGNORE | REPLACE}]
table-name [AS alias]
SET {column-name = expr | (column-name [, ...]) = (expr [, ...])} [, ...]
[WHERE expr]
[RETURNING expr [AS alias] [, ...]]
[LIMIT expr [OFFSET expr]]
The UPDATE statement modifies the values of columns in zero or more rows of an existing table. Each SET clause assigns a new value to a column. If no WHERE clause is provided, every row in the table is updated. When a WHERE clause is present, only rows for which the expression evaluates to true are modified.
It is not an error if the WHERE clause matches zero rows. The statement completes successfully and modifies nothing.
All expressions on the right-hand side of SET assignments are evaluated before any assignments are made. This means SET expressions can safely reference the current (pre-update) values of any column in the same row, including columns that appear on the left-hand side of another assignment in the same statement.
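One consequence is that two columns can be swapped in a single statement (a sketch):
CREATE TABLE pair (a INTEGER, b INTEGER);
INSERT INTO pair VALUES (1, 2);
UPDATE pair SET a = b, b = a;
SELECT * FROM pair;
-- 2|1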
The SET clause specifies one or more column assignments. Each assignment is either a single column name paired with an expression, or a parenthesized list of column names paired with a matching parenthesized list of expressions (row value syntax).
-- Single column assignment
UPDATE products SET price = 19.99 WHERE id = 1;
-- Multiple column assignments
UPDATE products SET price = 19.99, in_stock = 1 WHERE id = 1;
-- Row value syntax (equivalent to individual assignments)
UPDATE products SET (price, in_stock) = (19.99, 1) WHERE id = 1;
Columns not mentioned in the SET clause retain their existing values. If a column name appears more than once in the SET clause, all but the rightmost occurrence are ignored.
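A sketch of the duplicate-assignment rule:
CREATE TABLE prices (id INTEGER PRIMARY KEY, amount REAL);
INSERT INTO prices VALUES (1, 5.0);
UPDATE prices SET amount = 1.0, amount = 2.0 WHERE id = 1;
SELECT amount FROM prices;
-- 2.0 (only the rightmost assignment took effect)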
The WHERE clause limits which rows are updated. Only rows for which the expression evaluates to true are affected. The expression can be any valid SQL expression, including subqueries.
-- Update rows matching a condition
UPDATE orders SET status = 'shipped' WHERE status = 'pending';
-- Update using a subquery in WHERE
UPDATE orders SET status = 'priority'
WHERE customer_id IN (SELECT id FROM customers WHERE tier = 'gold');
By prefixing the UPDATE with OR followed by a conflict-resolution algorithm, you can control what happens when an updated value would violate a constraint (UNIQUE, NOT NULL, CHECK, or PRIMARY KEY). The OR clause appears between UPDATE and the table name.
| Algorithm | Behavior |
|---|---|
| ABORT | Abort the current statement and roll back any changes it made. This is the default behavior when no algorithm is specified. |
| ROLLBACK | Abort the current statement and roll back the entire enclosing transaction. |
| FAIL | Abort the current statement but keep changes already made to earlier rows within the same statement. |
| IGNORE | Skip the row that caused the violation and continue processing remaining rows. |
| REPLACE | Delete any pre-existing rows that conflict with the updated values, then apply the update. For a NOT NULL violation, the NULL is replaced with the column's default value; if the column has no default, the statement fails as with ABORT. |
Foreign key constraint violations are not affected by the conflict algorithm. They always behave like ABORT regardless of which algorithm is specified.
-- IGNORE: skip updates that would violate a UNIQUE constraint
CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT UNIQUE);
INSERT INTO users VALUES (1, '[email protected]'), (2, '[email protected]');
UPDATE OR IGNORE users SET email = '[email protected]' WHERE id = 2;
-- Row 2 is unchanged because the update would violate the UNIQUE constraint.
-- REPLACE: delete the conflicting row, then apply the update
UPDATE OR REPLACE users SET email = '[email protected]' WHERE id = 2;
-- Row 1 is deleted, row 2 now has email '[email protected]'.
The RETURNING clause causes the UPDATE statement to return values from each modified row, much like a SELECT. It accepts a list of expressions that may reference columns of the updated row (with their new, post-update values), use functions, or contain arbitrary expressions. Use * to return all columns.
UPDATE employees SET salary = salary * 1.10 WHERE department = 'Engineering'
RETURNING id, name, salary;
For full details on the RETURNING clause, see RETURNING.
The LIMIT clause restricts the maximum number of rows that the UPDATE modifies. A negative value for LIMIT means no limit. When OFFSET is specified, the first N rows that would otherwise be updated are skipped.
Note that without ORDER BY (which Turso does not currently support for UPDATE), the order in which rows are considered is arbitrary. Therefore, LIMIT and OFFSET choose from an unpredictable set of qualifying rows.
-- Update at most 1 row
UPDATE products SET featured = 1 WHERE category = 'electronics' LIMIT 1;
-- Update 2 rows, skipping the first 3
UPDATE logs SET archived = 1 LIMIT 2 OFFSET 3;
-- Update a single row
CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, department TEXT, salary REAL);
INSERT INTO employees VALUES
(1, 'Alice', 'Engineering', 85000.0),
(2, 'Bob', 'Marketing', 72000.0),
(3, 'Charlie', 'Engineering', 92000.0);
UPDATE employees SET salary = 90000.0 WHERE id = 1;
SELECT * FROM employees WHERE id = 1;
-- 1|Alice|Engineering|90000.0
-- Update multiple columns at once
CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, department TEXT, salary REAL);
INSERT INTO employees VALUES (2, 'Bob', 'Marketing', 72000.0);
UPDATE employees SET department = 'Sales', salary = 78000.0 WHERE id = 2;
SELECT * FROM employees WHERE id = 2;
-- 2|Bob|Sales|78000.0
-- Update all rows using an expression that references the current value
CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, salary REAL);
INSERT INTO employees VALUES (1, 'Alice', 100.0), (2, 'Bob', 200.0), (3, 'Charlie', 300.0);
UPDATE employees SET salary = salary * 2;
SELECT * FROM employees ORDER BY id;
-- 1|Alice|200.0
-- 2|Bob|400.0
-- 3|Charlie|600.0
-- Self-referencing expression: columns on the right side use pre-update values
CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, salary REAL);
INSERT INTO employees VALUES (1, 'Alice', 10.0), (2, 'Bob', 20.0);
UPDATE employees SET salary = salary + 5.0 WHERE salary < 15.0;
SELECT * FROM employees ORDER BY id;
-- 1|Alice|15.0
-- 2|Bob|20.0
-- Update with a subquery in the WHERE clause
CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, department TEXT, salary REAL);
INSERT INTO employees VALUES (1, 'Alice', 'Engineering', 85000.0), (2, 'Bob', 'Sales', 72000.0);
CREATE TABLE priority_departments (name TEXT);
INSERT INTO priority_departments VALUES ('Engineering');
UPDATE employees SET salary = salary + 10000.0
WHERE department IN (SELECT name FROM priority_departments);
SELECT name, salary FROM employees ORDER BY id;
-- Alice|95000.0
-- Bob|72000.0
-- Scalar subquery in SET to assign a computed value
CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, salary REAL);
INSERT INTO employees VALUES (1, 'Alice', 80000.0), (2, 'Bob', 60000.0);
UPDATE employees SET salary = (SELECT AVG(salary) FROM employees) WHERE id = 2;
SELECT * FROM employees ORDER BY id;
-- 1|Alice|80000.0
-- 2|Bob|70000.0
-- Update with EXISTS subquery
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, status TEXT);
INSERT INTO orders VALUES (1, 100, 'pending'), (2, 101, 'pending'), (3, 102, 'pending');
CREATE TABLE order_items (order_id INTEGER, product TEXT);
INSERT INTO order_items VALUES (1, 'widget'), (3, 'gadget');
UPDATE orders SET status = 'has_items'
WHERE EXISTS (SELECT 1 FROM order_items WHERE order_items.order_id = orders.id);
SELECT id, status FROM orders ORDER BY id;
-- 1|has_items
-- 2|pending
-- 3|has_items
-- UPDATE with RETURNING to see new values
CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, salary REAL);
INSERT INTO employees VALUES (1, 'Alice', 85000.0), (2, 'Bob', 72000.0);
UPDATE employees SET salary = salary + 10000.0 WHERE id = 1 RETURNING id, name, salary;
-- 1|Alice|95000.0
-- RETURNING with expressions and functions
CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, salary REAL);
INSERT INTO employees VALUES (1, 'Alice', 85000.0);
UPDATE employees SET name = 'alice johnson' WHERE id = 1
RETURNING id, upper(name), salary;
-- 1|ALICE JOHNSON|85000.0
-- RETURNING all columns with *
CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, salary REAL);
INSERT INTO employees VALUES (1, 'Alice', 85000.0);
UPDATE employees SET salary = 90000.0 WHERE id = 1 RETURNING *;
-- 1|Alice|90000.0
-- Row value syntax for multiple assignments
CREATE TABLE contacts (id INTEGER PRIMARY KEY, first_name TEXT, last_name TEXT);
INSERT INTO contacts VALUES (1, 'Jane', 'Doe');
UPDATE contacts SET (first_name, last_name) = ('John', 'Smith') WHERE id = 1
RETURNING *;
-- 1|John|Smith
-- UPDATE OR IGNORE: skip rows that would violate a UNIQUE constraint
CREATE TABLE tags (id INTEGER PRIMARY KEY, label TEXT UNIQUE);
INSERT INTO tags VALUES (1, 'urgent'), (2, 'review');
UPDATE OR IGNORE tags SET label = 'urgent' WHERE id = 2;
SELECT * FROM tags ORDER BY id;
-- 1|urgent
-- 2|review
-- UPDATE OR REPLACE: delete the conflicting row and apply the update
CREATE TABLE tags (id INTEGER PRIMARY KEY, label TEXT UNIQUE);
INSERT INTO tags VALUES (1, 'urgent'), (2, 'review');
UPDATE OR REPLACE tags SET label = 'urgent' WHERE id = 2;
SELECT * FROM tags ORDER BY id;
-- 2|urgent
-- UPDATE with LIMIT
CREATE TABLE tasks (id INTEGER PRIMARY KEY, done INTEGER DEFAULT 0);
INSERT INTO tasks (id) VALUES (1), (2), (3), (4), (5);
UPDATE tasks SET done = 1 LIMIT 2;
SELECT COUNT(*) FROM tasks WHERE done = 1;
-- 2
-- Update the rowid directly
CREATE TABLE notes (content TEXT);
INSERT INTO notes (content) VALUES ('hello');
UPDATE notes SET rowid = 42;
SELECT rowid, content FROM notes;
-- 42|hello
-- Update using a table alias in the WHERE clause
CREATE TABLE scores (player TEXT, points INTEGER);
INSERT INTO scores VALUES ('Alice', 10), ('Bob', 20);
UPDATE scores AS s SET points = 99 WHERE s.player = 'Alice';
SELECT * FROM scores ORDER BY player;
-- Alice|99
-- Bob|20
Turso supports the core UPDATE statement with broad SQLite compatibility. The following features are not yet available:
| Feature | Status |
|---|---|
| UPDATE … FROM | Not supported. Use subqueries in SET or WHERE instead. |
| ORDER BY clause | Not supported. LIMIT and OFFSET select from an arbitrary set of qualifying rows. |
| INDEXED BY / NOT INDEXED | Not supported. The query planner chooses indexes automatically. |
[WITH cte-name AS (SELECT ...) [, ...]]
DELETE FROM table-name
[WHERE expr]
[RETURNING expr [AS alias] [, ...]]
[LIMIT expr]
The DELETE statement removes rows from a table. If a WHERE clause is provided, only the rows for which the WHERE expression evaluates to true are removed. Rows where the expression evaluates to false or NULL are retained.
If the WHERE clause is omitted, all rows in the table are deleted. The table itself is not dropped – it remains in the schema with zero rows.
The optional RETURNING clause causes the DELETE statement to return values from each deleted row, behaving much like a SELECT over the rows being removed. The optional LIMIT clause restricts the maximum number of rows deleted.
The WHERE clause specifies which rows to delete. It accepts any SQL expression that evaluates to a boolean result. Only rows where the expression is true are deleted. The expression may reference columns of the target table, use subqueries, and include any supported operators or functions.
When the WHERE clause is omitted, every row in the table is deleted.
-- Delete a single row by primary key
DELETE FROM employees WHERE id = 42;
-- Delete rows matching a compound condition
DELETE FROM employees WHERE salary < 90000 AND department = 'Marketing';
-- Delete rows using an IN list
DELETE FROM orders WHERE status IN ('cancelled', 'expired');
The RETURNING clause causes the DELETE statement to return values from each deleted row. It accepts a comma-separated list of expressions that may reference columns of the deleted row, use functions, or contain computed values. Use * to return all columns.
DELETE FROM products WHERE quantity = 0 RETURNING id, name;
DELETE FROM employees WHERE department = 'Sales' RETURNING *;
For full details on the RETURNING clause, see RETURNING.
The LIMIT clause restricts the maximum number of rows deleted. When present, at most expr rows are removed. A negative LIMIT value means no limit.
-- Delete at most 2 rows from the table
DELETE FROM employees LIMIT 2;
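A negative LIMIT behaves as if no LIMIT clause were given (a sketch):
-- Delete every matching row despite the LIMIT keyword
DELETE FROM employees WHERE department = 'Sales' LIMIT -1;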
A DELETE statement may be preceded by a WITH clause that defines one or more common table expressions. CTEs defined this way can be referenced in the WHERE clause or in subqueries within the statement.
WITH low_earners AS (
SELECT id FROM employees WHERE salary < 90000
)
DELETE FROM employees WHERE id IN (SELECT id FROM low_earners);
For more on CTEs, see Common Table Expressions.
-- Create and populate a sample table
CREATE TABLE employees (
id INTEGER PRIMARY KEY,
name TEXT NOT NULL,
department TEXT,
salary REAL
);
INSERT INTO employees VALUES
(1, 'Alice', 'Engineering', 120000.00),
(2, 'Bob', 'Marketing', 85000.00),
(3, 'Carol', 'Engineering', 110000.00),
(4, 'Dave', 'Sales', 90000.00),
(5, 'Eve', 'Marketing', 78000.00);
-- Delete a single row by primary key
DELETE FROM employees WHERE id = 4;
SELECT * FROM employees;
-- 1|Alice|Engineering|120000.0
-- 2|Bob|Marketing|85000.0
-- 3|Carol|Engineering|110000.0
-- 5|Eve|Marketing|78000.0
-- Delete all rows matching a condition
DELETE FROM employees WHERE department = 'Marketing';
SELECT * FROM employees;
-- 1|Alice|Engineering|120000.0
-- 3|Carol|Engineering|110000.0
-- 4|Dave|Sales|90000.0
-- Delete all rows from a table (table itself remains)
DELETE FROM employees;
SELECT * FROM employees;
-- (no rows returned)
-- Delete with RETURNING to see which rows were removed
CREATE TABLE products (
id INTEGER PRIMARY KEY,
name TEXT,
price REAL,
quantity INTEGER
);
INSERT INTO products VALUES
(1, 'Widget', 9.99, 100),
(2, 'Gadget', 24.99, 0),
(3, 'Doohickey', 4.99, 0),
(4, 'Thingamajig', 14.99, 50);
DELETE FROM products WHERE quantity = 0 RETURNING id, name;
-- 2|Gadget
-- 3|Doohickey
-- RETURNING with a computed expression
DELETE FROM products WHERE quantity = 0
RETURNING id, name, price * quantity AS lost_value;
-- 2|Gadget|0.0
-- 3|Doohickey|0.0
-- RETURNING * returns all columns of each deleted row
DELETE FROM products WHERE price > 100.00 RETURNING *;
-- Delete using a subquery in WHERE
CREATE TABLE employees (
id INTEGER PRIMARY KEY,
name TEXT NOT NULL,
department TEXT,
salary REAL
);
INSERT INTO employees VALUES
(1, 'Alice', 'Engineering', 120000.00),
(2, 'Bob', 'Marketing', 85000.00),
(3, 'Carol', 'Engineering', 110000.00);
DELETE FROM employees
WHERE id IN (SELECT id FROM employees WHERE department = 'Engineering');
SELECT * FROM employees;
-- 2|Bob|Marketing|85000.0
-- Delete with a CTE
CREATE TABLE employees (
id INTEGER PRIMARY KEY,
name TEXT NOT NULL,
department TEXT,
salary REAL
);
INSERT INTO employees VALUES
(1, 'Alice', 'Engineering', 120000.00),
(2, 'Bob', 'Marketing', 85000.00),
(3, 'Carol', 'Engineering', 110000.00),
(4, 'Dave', 'Sales', 90000.00),
(5, 'Eve', 'Marketing', 78000.00);
WITH low_earners AS (
SELECT id FROM employees WHERE salary < 90000
)
DELETE FROM employees WHERE id IN (SELECT id FROM low_earners);
SELECT * FROM employees;
-- 1|Alice|Engineering|120000.0
-- 3|Carol|Engineering|110000.0
-- 4|Dave|Sales|90000.0
-- Delete with LIMIT: remove at most 2 rows
CREATE TABLE employees (
id INTEGER PRIMARY KEY,
name TEXT NOT NULL,
department TEXT,
salary REAL
);
INSERT INTO employees VALUES
(1, 'Alice', 'Engineering', 120000.00),
(2, 'Bob', 'Marketing', 85000.00),
(3, 'Carol', 'Engineering', 110000.00),
(4, 'Dave', 'Sales', 90000.00),
(5, 'Eve', 'Marketing', 78000.00);
DELETE FROM employees LIMIT 2;
SELECT * FROM employees;
-- 3|Carol|Engineering|110000.0
-- 4|Dave|Sales|90000.0
-- 5|Eve|Marketing|78000.0
-- Delete with WHERE and LIMIT: remove one matching row
CREATE TABLE employees (
id INTEGER PRIMARY KEY,
name TEXT NOT NULL,
department TEXT,
salary REAL
);
INSERT INTO employees VALUES
(1, 'Alice', 'Engineering', 120000.00),
(2, 'Bob', 'Marketing', 85000.00),
(3, 'Carol', 'Engineering', 110000.00);
DELETE FROM employees WHERE department = 'Engineering' LIMIT 1;
SELECT * FROM employees;
-- 2|Bob|Marketing|85000.0
-- 3|Carol|Engineering|110000.0
-- Use changes() to check how many rows were deleted
CREATE TABLE employees (
id INTEGER PRIMARY KEY,
name TEXT NOT NULL,
department TEXT,
salary REAL
);
INSERT INTO employees VALUES
(1, 'Alice', 'Engineering', 120000.00),
(2, 'Bob', 'Marketing', 85000.00),
(3, 'Carol', 'Engineering', 110000.00),
(4, 'Dave', 'Sales', 90000.00),
(5, 'Eve', 'Marketing', 78000.00);
DELETE FROM employees WHERE department = 'Marketing';
SELECT changes();
-- 2
Turso supports the DELETE statement with broad compatibility with SQLite, including the WHERE clause, RETURNING clause, LIMIT, and common table expressions.
The following differences from SQLite apply:
ORDER BY on DELETE is not supported. In SQLite, ORDER BY is available (when compiled with SQLITE_ENABLE_UPDATE_DELETE_LIMIT) and is used together with LIMIT to control which specific rows are deleted. Turso supports LIMIT on DELETE but does not support ORDER BY in this context.
OFFSET on DELETE is parsed but not currently effective. While the syntax is accepted, the OFFSET value is ignored and all deletions start from the first matching row. Use a subquery with LIMIT and OFFSET in a SELECT if you need to skip rows before deleting.
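A sketch of the workaround, using a hypothetical logs table:
CREATE TABLE logs (id INTEGER PRIMARY KEY, msg TEXT);
-- Skip the first 3 rows by id, then delete the next 2
DELETE FROM logs WHERE id IN (
  SELECT id FROM logs ORDER BY id LIMIT 2 OFFSET 3
);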
INDEXED BY / NOT INDEXED hints are not supported on DELETE statements.
{INSERT | UPDATE | DELETE} ...
RETURNING {expr [[AS] column-alias] | *} [, ...]
The RETURNING clause is an optional clause that can be appended to INSERT, UPDATE, and DELETE statements. It causes the statement to return one result row for each database row that is inserted, updated, or deleted. This eliminates the need for a separate SELECT query to retrieve values that were generated or modified by the statement.
A common use case is retrieving auto-generated primary keys, computed default values, or confirming which rows were affected by an UPDATE or DELETE.
The RETURNING clause is not part of the SQL standard. It follows the syntax established by PostgreSQL.
The RETURNING keyword is followed by a comma-separated list of expressions, similar to the expressions that follow SELECT in a query. Each expression may reference columns of the table being modified, use literal values, call scalar functions, or combine these with operators.
Expressions in RETURNING can reference any column of the modified table, either unqualified or qualified with the table name:
RETURNING id, name
RETURNING orders.id, orders.name
For INSERT and UPDATE, column references reflect the values after the change has been applied. For DELETE, column references reflect the values of the row before it is removed.
The * expands into all columns of the table being modified:
INSERT INTO orders (...) VALUES (...) RETURNING *;
Each expression may optionally be followed by AS column-alias (or just column-alias without AS) to set the name of the result column:
RETURNING quantity * unit_price AS total
The following kinds of expressions are allowed in a RETURNING clause:
- Literal values (42, 'hello', NULL)
- Column references (id, table_name.column_name)
- Arithmetic and comparison operators (price * 1.1, value > 0)
- String concatenation (first || ' ' || last)
- Scalar function calls (upper(name), round(price, 2), coalesce(a, b))
- CASE expressions
- CAST expressions
- IN, BETWEEN, LIKE, GLOB, IS NULL, IS NOT NULL operators
- The rowid pseudo-column

Top-level aggregate functions (SUM, COUNT, AVG, etc.) and window functions are not permitted in RETURNING. Using them produces an error:
-- Error: aggregate functions not allowed in RETURNING
INSERT INTO t VALUES (1, 42) RETURNING SUM(value);
Returns one row for each inserted row. When inserting multiple rows, one result row is produced per input row. The returned values reflect the state after insertion, including auto-generated primary keys and evaluated DEFAULT expressions.
Returns one row for each row that was actually modified by the UPDATE. Column values in the result reflect the new values after the update. If the WHERE clause matches no rows, no result rows are produced.
Returns one row for each deleted row. Column values in the result reflect the values the row had before deletion. If the WHERE clause matches no rows, no result rows are produced.
When RETURNING is used with an INSERT ... ON CONFLICT statement, it returns rows for both the insert and update code paths. If the conflict resolution is DO NOTHING and a conflict occurs, no row is returned for that conflicting input row.
The order of rows returned by RETURNING is not guaranteed. It typically matches the order in which rows were processed, but applications should not rely on any particular ordering.
The RETURNING clause reports the direct changes made by the statement. It does not report additional changes caused by triggers or foreign key constraint actions.
-- Retrieve the auto-generated id after inserting a row
CREATE TABLE orders (
id INTEGER PRIMARY KEY,
customer TEXT NOT NULL,
product TEXT NOT NULL,
quantity INTEGER NOT NULL,
unit_price REAL NOT NULL
);
INSERT INTO orders (customer, product, quantity, unit_price)
VALUES ('Alice', 'Widget', 5, 9.99)
RETURNING id;
-- 1
-- Return all columns of newly inserted rows
INSERT INTO orders (customer, product, quantity, unit_price)
VALUES ('Alice', 'Widget', 5, 9.99)
RETURNING *;
-- 1|Alice|Widget|5|9.99
-- Return a computed expression with an alias
INSERT INTO orders (customer, product, quantity, unit_price)
VALUES ('Alice', 'Widget', 5, 9.99)
RETURNING id, customer, quantity * unit_price AS total;
-- 1|Alice|49.95
-- Return multiple rows from a multi-row INSERT
CREATE TABLE users (
id INTEGER PRIMARY KEY,
name TEXT NOT NULL,
email TEXT NOT NULL,
active INTEGER DEFAULT 1
);
INSERT INTO users (name, email)
VALUES ('Alice', '[email protected]'),
('Bob', '[email protected]'),
('Charlie', '[email protected]')
RETURNING id, name;
-- 1|Alice
-- 2|Bob
-- 3|Charlie
-- Retrieve auto-filled DEFAULT values
CREATE TABLE events (
id INTEGER PRIMARY KEY,
name TEXT NOT NULL,
created_at TEXT DEFAULT (datetime('now'))
);
INSERT INTO events (name) VALUES ('signup')
RETURNING id, name, created_at;
-- 1|signup|2026-02-11 18:52:48
-- See which rows were updated and their new values
CREATE TABLE products (
id INTEGER PRIMARY KEY,
name TEXT NOT NULL,
price REAL NOT NULL
);
INSERT INTO products (name, price)
VALUES ('Laptop', 999.99), ('Mouse', 29.99), ('Keyboard', 79.99);
UPDATE products SET price = price * 0.9
RETURNING id, name, round(price, 2) AS discounted_price;
-- 1|Laptop|899.99
-- 2|Mouse|26.99
-- 3|Keyboard|71.99
-- Use a CASE expression in RETURNING
CREATE TABLE tasks (
id INTEGER PRIMARY KEY,
title TEXT NOT NULL,
done INTEGER DEFAULT 0
);
INSERT INTO tasks (title)
VALUES ('Write report'), ('Fix bug'), ('Review PR');
UPDATE tasks SET done = 1 WHERE id = 2
RETURNING id, title,
CASE WHEN done THEN 'completed' ELSE 'pending' END AS status;
-- 2|Fix bug|completed
-- Confirm which rows were deleted
CREATE TABLE users (
id INTEGER PRIMARY KEY,
name TEXT NOT NULL,
email TEXT NOT NULL,
active INTEGER DEFAULT 1
);
INSERT INTO users (name, email)
VALUES ('Alice', '[email protected]'),
('Bob', '[email protected]'),
('Charlie', '[email protected]');
DELETE FROM users WHERE active = 1
RETURNING id, name, email;
-- 1|Alice|[email protected]
-- 2|Bob|[email protected]
-- 3|Charlie|[email protected]
-- Use function expressions in DELETE RETURNING
CREATE TABLE logs (
id INTEGER PRIMARY KEY,
message TEXT NOT NULL,
level TEXT NOT NULL
);
INSERT INTO logs (message, level)
VALUES ('User login', 'info'), ('Disk full', 'error'), ('Timeout', 'warn');
DELETE FROM logs WHERE level = 'error'
RETURNING id, message, upper(level) AS level;
-- 2|Disk full|ERROR
-- RETURNING with UPSERT: insert path
CREATE TABLE settings (key TEXT PRIMARY KEY, value TEXT NOT NULL);
INSERT INTO settings (key, value) VALUES ('theme', 'dark')
ON CONFLICT(key) DO UPDATE SET value = excluded.value
RETURNING key, value;
-- theme|dark
-- RETURNING with UPSERT: update path (conflict triggers DO UPDATE)
CREATE TABLE settings (key TEXT PRIMARY KEY, value TEXT NOT NULL);
INSERT INTO settings (key, value) VALUES ('theme', 'dark');
INSERT INTO settings (key, value) VALUES ('theme', 'light')
ON CONFLICT(key) DO UPDATE SET value = excluded.value
RETURNING key, value;
-- theme|light
-- RETURNING with UPSERT: DO NOTHING returns no rows on conflict
CREATE TABLE settings (key TEXT PRIMARY KEY, value TEXT NOT NULL);
INSERT INTO settings (key, value) VALUES ('theme', 'dark');
INSERT INTO settings (key, value) VALUES ('theme', 'light')
ON CONFLICT DO NOTHING
RETURNING key, value;
-- (no rows returned)
-- RETURNING with INSERT ... SELECT
CREATE TABLE source (id INTEGER, name TEXT, score INTEGER);
CREATE TABLE archive (name TEXT, score INTEGER);
INSERT INTO source VALUES (1, 'Alice', 95), (2, 'Bob', 82), (3, 'Charlie', 78);
INSERT INTO archive SELECT name, score FROM source WHERE score >= 80
RETURNING *;
-- Alice|95
-- Bob|82
CREATE TABLE [IF NOT EXISTS] table-name (
column-def [, ...]
[, table-constraint [, ...]]
) [STRICT]
Where column-def is:
column-name [type-name] [column-constraint ...]
And column-constraint is one of:
PRIMARY KEY [ASC | DESC] [AUTOINCREMENT] [conflict-clause]
NOT NULL [conflict-clause]
UNIQUE [conflict-clause]
CHECK (expr)
DEFAULT {value | (expr)}
COLLATE {BINARY | NOCASE | RTRIM}
REFERENCES foreign-table (foreign-column [, ...]) [foreign-key-action ...]
[CONSTRAINT constraint-name] column-constraint
And table-constraint is one of:
PRIMARY KEY (column-name [, ...]) [conflict-clause]
UNIQUE (column-name [, ...]) [conflict-clause]
CHECK (expr)
FOREIGN KEY (column-name [, ...]) REFERENCES foreign-table (foreign-column [, ...]) [foreign-key-action ...]
And conflict-clause is:
ON CONFLICT {ROLLBACK | ABORT | FAIL | IGNORE | REPLACE}
The CREATE TABLE statement creates a new table in the database. Each table has a name, a list of column definitions, and optional table-level constraints. Table names beginning with sqlite_ are reserved for internal use and cannot be created by user SQL.
Every ordinary table in Turso has an implicit 64-bit signed integer key called the rowid. The rowid uniquely identifies each row within the table and provides fast lookup. You can access it using the names rowid, _rowid_, or oid, unless one of those names is used as an explicit column name.
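The three rowid aliases can be seen side by side. A minimal sketch using Python's sqlite3 module as a stand-in (Turso is SQLite-compatible); the table name is hypothetical:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE notes (body TEXT)")
con.execute("INSERT INTO notes VALUES ('hello')")

# rowid, _rowid_, and oid all name the same implicit 64-bit integer key.
row = con.execute("SELECT rowid, _rowid_, oid FROM notes").fetchone()
print(row)  # (1, 1, 1)
```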
When IF NOT EXISTS is specified, the statement is a no-op if a table with the same name already exists. Without this clause, attempting to create a table with an existing name produces an error.
CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT);
Each column definition specifies a column name and an optional declared type. The declared type determines the column’s type affinity , which influences how values are coerced on insertion. See Type Conversions for the full affinity rules.
Turso uses dynamic typing. A column’s declared type does not restrict what values can be stored in it (unless the table uses STRICT mode). Any column can hold any storage class: NULL, INTEGER, REAL, TEXT, or BLOB.
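The interaction between dynamic typing and affinity can be demonstrated with typeof(). A sketch using Python's sqlite3 module as a stand-in (Turso is SQLite-compatible); the table name is hypothetical:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (n INTEGER)")

# A numeric-looking string is coerced to INTEGER by the column's affinity;
# a non-numeric string is stored as TEXT despite the declared type.
con.execute("INSERT INTO t VALUES ('123')")
con.execute("INSERT INTO t VALUES ('hello')")

types = [r[0] for r in con.execute("SELECT typeof(n) FROM t ORDER BY rowid")]
print(types)  # ['integer', 'text']
```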
The DEFAULT clause specifies a value to use when an INSERT statement omits a column. If no DEFAULT clause is specified, the default value is NULL.
The default value can be:
| Form | Description |
|---|---|
NULL | Null value. |
| Signed number | A literal integer or real number, optionally with a + or - prefix. |
| String literal | A single-quoted string. |
TRUE / FALSE | Boolean literals (stored as 1 and 0). |
CURRENT_TIME | Current time as HH:MM:SS. |
CURRENT_DATE | Current date as YYYY-MM-DD. |
CURRENT_TIMESTAMP | Current date and time as YYYY-MM-DD HH:MM:SS. |
(expr) | An expression in parentheses, evaluated at insert time. |
CREATE TABLE products (
id INTEGER PRIMARY KEY,
name TEXT NOT NULL DEFAULT 'Unknown',
price REAL DEFAULT 0.0,
in_stock INTEGER DEFAULT TRUE,
created_at TEXT DEFAULT CURRENT_TIMESTAMP
);
Each table may have at most one primary key. A primary key can be declared as a column constraint (single-column) or as a table constraint (composite).
Single-column primary key:
CREATE TABLE users (
id INTEGER PRIMARY KEY,
name TEXT NOT NULL
);
Composite primary key (table constraint):
CREATE TABLE enrollment (
student_id INTEGER,
course_id INTEGER,
grade TEXT,
PRIMARY KEY (student_id, course_id)
);
Attempting to insert a duplicate primary key value produces a constraint violation error.
When a single column is declared with the exact type name INTEGER and is the PRIMARY KEY, that column becomes an alias for the rowid. This is a special case:
- The type must be spelled exactly INTEGER – not INT, BIGINT, SMALLINT, or any other variation.
- Inserting NULL into an INTEGER PRIMARY KEY column automatically assigns the next available rowid.
CREATE TABLE events (
id INTEGER PRIMARY KEY,
description TEXT
);
INSERT INTO events (description) VALUES ('Server started');
-- id is automatically assigned (e.g., 1)
INSERT INTO events (id, description) VALUES (NULL, 'User login');
-- id is automatically assigned (e.g., 2)
The AUTOINCREMENT keyword can only be used with INTEGER PRIMARY KEY. It modifies the automatic rowid assignment to guarantee that automatically-assigned rowids are never reused, even after rows are deleted.
Without AUTOINCREMENT, Turso reuses deleted rowid values. With AUTOINCREMENT, Turso tracks the largest rowid ever inserted in the internal sqlite_sequence table, and new auto-assigned rowids are always greater than any previously used value.
CREATE TABLE audit_log (
id INTEGER PRIMARY KEY AUTOINCREMENT,
action TEXT NOT NULL
);
INSERT INTO audit_log (action) VALUES ('create');
INSERT INTO audit_log (action) VALUES ('update');
DELETE FROM audit_log WHERE id = 2;
INSERT INTO audit_log (action) VALUES ('delete');
-- The new row gets id=3, not id=2
AUTOINCREMENT has a small performance cost because it requires reading and writing the sqlite_sequence table on each insert. Use it only when strictly monotonically increasing rowids are required.
If the maximum rowid value (9223372036854775807) has been used, inserting a new row with AUTOINCREMENT produces an error rather than attempting to find an unused rowid.
The NOT NULL constraint prevents a column from containing NULL values. Attempting to insert or update a NULL into a NOT NULL column produces a constraint violation error.
CREATE TABLE contacts (
id INTEGER PRIMARY KEY,
name TEXT NOT NULL,
email TEXT NOT NULL,
phone TEXT
);
The UNIQUE constraint ensures that all values in a column (or combination of columns) are distinct. Multiple UNIQUE constraints can appear on the same table.
CREATE TABLE accounts (
id INTEGER PRIMARY KEY,
username TEXT UNIQUE,
email TEXT UNIQUE
);
A UNIQUE table constraint can span multiple columns:
CREATE TABLE assignments (
employee_id INTEGER,
project_id INTEGER,
role TEXT,
UNIQUE (employee_id, project_id)
);
Note: NULL values are considered distinct from each other for UNIQUE constraint purposes. Multiple rows may have NULL in a UNIQUE column without violating the constraint.
The CHECK constraint specifies an expression that must evaluate to a non-zero (true) value for every row. If the expression evaluates to zero, the insert or update is rejected. NULL values pass CHECK constraints (NULL is neither true nor false).
CHECK can be used as a column constraint or a table constraint:
CREATE TABLE employees (
id INTEGER PRIMARY KEY,
name TEXT NOT NULL,
age INTEGER CHECK (age >= 18),
salary INTEGER CHECK (salary > 0),
CHECK (age <= 120)
);
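The rule that NULL passes CHECK constraints while a false expression is rejected can be demonstrated directly. A sketch using Python's sqlite3 module as a stand-in (Turso is SQLite-compatible):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, age INTEGER CHECK (age >= 18))")

# NULL is neither true nor false, so the CHECK does not reject it ...
con.execute("INSERT INTO employees (age) VALUES (NULL)")

# ... while a value that makes the expression false is rejected.
try:
    con.execute("INSERT INTO employees (age) VALUES (12)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True
```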
CHECK expressions may reference multiple columns, use functions, and include operators like IN, BETWEEN, LIKE, and CASE:
CREATE TABLE orders (
id INTEGER PRIMARY KEY,
status TEXT CHECK (status IN ('pending', 'active', 'shipped', 'delivered')),
quantity INTEGER,
unit_price INTEGER,
total INTEGER,
CHECK (total = quantity * unit_price)
);
CHECK expressions cannot contain subqueries, aggregate functions, or bind parameters. Referencing a non-existent column in a CHECK expression produces an error at table creation time.
The COLLATE clause on a column definition sets the default collation sequence for that column. This affects comparisons, sorting, and UNIQUE/PRIMARY KEY constraints.
| Collation | Behavior |
|---|---|
BINARY | Compares bytes directly (default). |
NOCASE | Case-insensitive comparison for ASCII characters. |
RTRIM | Like BINARY, but trailing spaces are ignored. |
CREATE TABLE tags (
id INTEGER PRIMARY KEY,
name TEXT COLLATE NOCASE UNIQUE
);
Foreign key constraints enforce referential integrity between tables. A foreign key in a child table references a column (or columns) in a parent table, ensuring that every value in the child column exists in the parent column.
Foreign key enforcement must be enabled with PRAGMA foreign_keys = ON (it is off by default).
Column-level syntax:
CREATE TABLE orders (
id INTEGER PRIMARY KEY,
customer_id INTEGER REFERENCES customers (id)
);
Table-level syntax:
CREATE TABLE order_items (
id INTEGER PRIMARY KEY,
order_id INTEGER,
product_id INTEGER,
FOREIGN KEY (order_id) REFERENCES orders (id),
FOREIGN KEY (product_id) REFERENCES products (id)
);
Foreign key actions specify what happens when the referenced row in the parent table is deleted or updated:
FOREIGN KEY (column) REFERENCES parent (column)
[ON DELETE {SET NULL | CASCADE | RESTRICT | NO ACTION}]
[ON UPDATE {SET NULL | CASCADE | RESTRICT | NO ACTION}]
| Action | Behavior |
|---|---|
NO ACTION | Reject the change if child rows exist (default). |
RESTRICT | Same as NO ACTION, but checked immediately rather than deferred. |
CASCADE | Delete or update the child rows to match the parent change. |
SET NULL | Set the child foreign key column(s) to NULL. |
PRAGMA foreign_keys = ON;
CREATE TABLE departments (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE staff (
id INTEGER PRIMARY KEY,
dept_id INTEGER REFERENCES departments (id) ON DELETE CASCADE
);
The PRIMARY KEY, NOT NULL, and UNIQUE constraints accept an optional ON CONFLICT clause that specifies how constraint violations are handled:
| Algorithm | Behavior |
|---|---|
ABORT | Abort the current statement and roll back its changes (default). |
ROLLBACK | Abort the current statement and roll back the entire transaction. |
FAIL | Abort the current statement but keep changes from earlier rows. |
IGNORE | Skip the row that caused the violation and continue. |
REPLACE | For UNIQUE/PRIMARY KEY: delete the conflicting row, then insert. For NOT NULL: replace the NULL with the column’s default value. |
CREATE TABLE settings (
key TEXT PRIMARY KEY ON CONFLICT REPLACE,
value TEXT NOT NULL
);
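The NOT NULL variant of REPLACE behaves differently from the UNIQUE variant: instead of deleting a conflicting row, it substitutes the column's default value for the NULL. A sketch of that behavior using Python's sqlite3 module as a stand-in (Turso is SQLite-compatible); the table name is hypothetical:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE prefs (
        name TEXT NOT NULL ON CONFLICT REPLACE DEFAULT 'unnamed'
    )
""")

# Inserting NULL would violate NOT NULL; the REPLACE algorithm
# substitutes the column's default value instead of raising an error.
con.execute("INSERT INTO prefs VALUES (NULL)")
value = con.execute("SELECT name FROM prefs").fetchone()[0]
print(value)  # unnamed
```

If the column had no non-NULL default, the violation would fall back to the ABORT behavior.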
Note: CHECK constraints do not accept an ON CONFLICT clause in the column definition. The conflict resolution for CHECK violations is always ABORT by default, but can be overridden per-statement using INSERT OR IGNORE, INSERT OR REPLACE, etc.
The STRICT keyword at the end of the column list enables strict type checking. In a STRICT table, every column must have a declared type, and the type must be one of: INTEGER, REAL, TEXT, BLOB, or ANY.
When inserting or updating values, Turso rejects values that do not match the declared type (with some coercion: numeric strings are accepted for INTEGER and REAL columns). Columns without a NOT NULL constraint still accept NULL values.
CREATE TABLE measurements (
id INTEGER PRIMARY KEY,
sensor_name TEXT,
value REAL,
recorded_at TEXT
) STRICT;
CREATE TABLE users (
id INTEGER PRIMARY KEY,
username TEXT NOT NULL UNIQUE,
email TEXT NOT NULL UNIQUE,
display_name TEXT DEFAULT 'Anonymous',
age INTEGER CHECK (age >= 13)
);
INSERT INTO users (id, username, email, age)
VALUES (1, 'alice', '[email protected]', 30);
SELECT * FROM users;
-- 1|alice|[email protected]|Anonymous|30
CREATE TABLE enrollment (
student_id INTEGER,
course_id INTEGER,
enrolled_at TEXT DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (student_id, course_id)
);
INSERT INTO enrollment (student_id, course_id) VALUES (1, 101);
INSERT INTO enrollment (student_id, course_id) VALUES (1, 102);
INSERT INTO enrollment (student_id, course_id) VALUES (2, 101);
SELECT student_id, course_id FROM enrollment ORDER BY student_id, course_id;
-- 1|101
-- 1|102
-- 2|101
CREATE TABLE audit_log (
id INTEGER PRIMARY KEY AUTOINCREMENT,
action TEXT NOT NULL
);
INSERT INTO audit_log (action) VALUES ('user.login');
INSERT INTO audit_log (action) VALUES ('user.logout');
SELECT * FROM audit_log;
-- 1|user.login
-- 2|user.logout
CREATE TABLE accounts (
id INTEGER PRIMARY KEY,
username TEXT CHECK (length(username) >= 3 AND length(username) <= 20),
email TEXT CHECK (email LIKE '%@%.%')
);
INSERT INTO accounts VALUES (1, 'alice', '[email protected]');
SELECT * FROM accounts;
-- 1|alice|[email protected]
PRAGMA foreign_keys = ON;
CREATE TABLE departments (
id INTEGER PRIMARY KEY,
name TEXT NOT NULL
);
CREATE TABLE employees (
id INTEGER PRIMARY KEY,
name TEXT NOT NULL,
dept_id INTEGER REFERENCES departments (id) ON DELETE CASCADE
);
INSERT INTO departments VALUES (1, 'Engineering'), (2, 'Sales');
INSERT INTO employees VALUES (1, 'Alice', 1), (2, 'Bob', 1), (3, 'Carol', 2);
DELETE FROM departments WHERE id = 1;
SELECT * FROM employees;
-- 3|Carol|2
CREATE TABLE inventory (
id INTEGER PRIMARY KEY,
item_name TEXT,
quantity INTEGER
) STRICT;
INSERT INTO inventory VALUES (1, 'Bolts', 500);
SELECT * FROM inventory;
-- 1|Bolts|500
CREATE TABLE tasks (
id INTEGER PRIMARY KEY,
title TEXT NOT NULL,
status TEXT DEFAULT 'pending' CHECK (status IN ('pending', 'active', 'done')),
priority INTEGER DEFAULT (1 + 1)
);
INSERT INTO tasks (id, title) VALUES (1, 'Review pull request');
SELECT * FROM tasks;
-- 1|Review pull request|pending|2
CREATE TABLE config (key TEXT PRIMARY KEY, value TEXT);
CREATE TABLE IF NOT EXISTS config (key TEXT PRIMARY KEY, value TEXT);
-- No error on the second statement
CREATE TABLE kv_store (
key TEXT PRIMARY KEY ON CONFLICT REPLACE,
value TEXT NOT NULL
);
INSERT INTO kv_store VALUES ('theme', 'light');
INSERT INTO kv_store VALUES ('theme', 'dark');
SELECT * FROM kv_store;
-- theme|dark
- CREATE TABLE ... AS SELECT is not supported. Use CREATE TABLE followed by INSERT ... SELECT instead.
- Temporary tables (CREATE TEMP TABLE) are not supported.
- The SET DEFAULT foreign key action is not supported. Use SET NULL or CASCADE instead.
CREATE [UNIQUE] INDEX [IF NOT EXISTS] index-name
ON table-name (indexed-column [, ...])
[WHERE expr]
Where indexed-column is:
{column-name | (expr)} [COLLATE collation-name] [ASC | DESC]
To remove an index:
DROP INDEX [IF EXISTS] index-name
The CREATE INDEX statement creates a new index on one or more columns of an existing table. Indexes speed up queries that filter, sort, or join on the indexed columns, at the cost of additional storage and slightly slower writes.
An index does not change the logical content of a table. Queries produce the same results whether or not an index exists. Turso automatically decides whether to use an available index when executing a query. You can use EXPLAIN QUERY PLAN to see whether a query uses a particular index.
The DROP INDEX statement removes an index from the database. Dropping an index has no effect on the table data.
When the UNIQUE keyword appears between CREATE and INDEX, the index enforces a uniqueness constraint on the indexed columns. Any attempt to insert or update a row that would create a duplicate entry in the indexed columns produces a UNIQUE constraint failed error.
NULL values are considered distinct from each other for uniqueness purposes. Multiple rows may contain NULL in a UNIQUE index column without violating the constraint.
CREATE UNIQUE INDEX idx_users_email ON users(email);
-- Two rows with email = '[email protected]' would be rejected.
-- Two rows with email = NULL are allowed.
When IF NOT EXISTS is specified, the statement is a no-op if an index with the same name already exists. Without this clause, attempting to create an index whose name is already in use produces an error.
CREATE INDEX IF NOT EXISTS idx_products_name ON products(name);
-- Safe to run repeatedly without error.
When IF EXISTS is included in a DROP INDEX statement, no error is raised if the named index does not exist. Without this clause, dropping a nonexistent index produces an error.
DROP INDEX IF EXISTS idx_old_report;
Each indexed column entry in the column list is either a column name or a parenthesized expression. Multiple columns or expressions can be listed, separated by commas, to create a composite (multi-column) index.
The order of columns in a composite index matters. An index on (state, city) is most useful for queries that filter on state, or on both state and city, but provides little benefit for queries that filter only on city.
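The leading-column rule can be observed with EXPLAIN QUERY PLAN. A sketch using Python's sqlite3 module as a stand-in (Turso is SQLite-compatible); the table and index names are hypothetical:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE shipments (state TEXT, city TEXT, zip TEXT)")
con.execute("CREATE INDEX idx_loc ON shipments(state, city)")

def plan(sql):
    # The last column of an EXPLAIN QUERY PLAN row describes the step taken.
    return con.execute("EXPLAIN QUERY PLAN " + sql).fetchone()[-1]

# Filtering on the leading column can seek into the index (a SEARCH) ...
print(plan("SELECT * FROM shipments WHERE state = 'CA'"))
# e.g. SEARCH shipments USING COVERING INDEX idx_loc (state=?)

# ... but filtering only on the second column cannot, and falls back to a scan.
print(plan("SELECT * FROM shipments WHERE city = 'Austin'"))
# e.g. SCAN shipments ...
```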
Each indexed column or expression may include a COLLATE clause specifying the collation sequence used for text comparisons within the index. If omitted, the collation defaults to the one defined on the column in CREATE TABLE, or BINARY if none was specified.
| Collation | Behavior |
|---|---|
BINARY | Compares bytes directly (default). |
NOCASE | Case-insensitive comparison for ASCII characters. |
RTRIM | Like BINARY, but trailing spaces are ignored. |
For a query to use an index, the collation in the query’s comparison must match the collation of the index. If a column is defined as COLLATE NOCASE but the index uses COLLATE BINARY, queries that rely on case-insensitive matching will not use that index.
CREATE INDEX idx_contacts_name ON contacts(last_name COLLATE NOCASE);
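The collation-matching requirement can also be observed with EXPLAIN QUERY PLAN. A sketch using Python's sqlite3 module as a stand-in (Turso is SQLite-compatible); the table and index names are hypothetical:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE contacts (last_name TEXT)")
con.execute("CREATE INDEX idx_name ON contacts(last_name COLLATE NOCASE)")

def plan(sql):
    # The last column of an EXPLAIN QUERY PLAN row describes the step taken.
    return con.execute("EXPLAIN QUERY PLAN " + sql).fetchone()[-1]

# A comparison using the index's NOCASE collation can seek into the index.
print(plan("SELECT * FROM contacts WHERE last_name = 'smith' COLLATE NOCASE"))
# e.g. SEARCH contacts USING COVERING INDEX idx_name (last_name=?)

# A plain (BINARY) comparison does not match the index's collation,
# so it cannot seek and falls back to a scan.
print(plan("SELECT * FROM contacts WHERE last_name = 'smith'"))
# e.g. SCAN contacts ...
```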
Each indexed column or expression may be followed by ASC (ascending, the default) or DESC (descending) to specify the sort order stored in the index. Descending indexes are useful for queries that sort in descending order, allowing the index to be scanned in its natural order rather than reversed.
CREATE INDEX idx_transactions_recent ON transactions(account_id, created_at DESC);
When a WHERE clause is appended to CREATE INDEX, the result is a partial index. Only rows that satisfy the WHERE expression are included in the index. This reduces index size and write overhead for tables where queries consistently filter on a known condition.
The WHERE expression in a partial index has the following restrictions:
- It may reference only columns of the table being indexed.
- It may not contain subqueries, aggregate functions, or bind parameters.
- It may not call non-deterministic functions (e.g., random()).
For Turso to use a partial index when executing a query, the query’s WHERE clause must imply the index’s WHERE clause. In the simplest case, the query includes the same condition as the index.
CREATE INDEX idx_orders_pending ON orders(customer_id)
WHERE status = 'pending';
-- Only rows with status = 'pending' are indexed.
Instead of a plain column name, an indexed column entry may be a parenthesized expression. This is useful for indexing computed values, such as the lowercase version of a text column.
Expression indexes have the following restrictions:
- The expression may reference only columns of the row being indexed; it may not contain subqueries.
- The expression must be deterministic: it may not call non-deterministic functions (random(), last_insert_rowid(), etc.).
CREATE INDEX idx_users_email_lower ON users(lower(email));
-- Speeds up queries like: SELECT * FROM users WHERE lower(email) = '[email protected]';
CREATE TABLE employees (
id INTEGER PRIMARY KEY,
name TEXT,
department TEXT,
salary REAL
);
CREATE INDEX idx_employees_name ON employees(name);
CREATE TABLE users (
id INTEGER PRIMARY KEY,
email TEXT NOT NULL
);
CREATE UNIQUE INDEX idx_users_email ON users(email);
INSERT INTO users VALUES (1, '[email protected]');
INSERT INTO users VALUES (2, '[email protected]');
-- INSERT INTO users VALUES (3, '[email protected]');
-- Error: UNIQUE constraint failed: users.email
CREATE TABLE shipments (
id INTEGER PRIMARY KEY,
customer_id INTEGER,
city TEXT,
state TEXT,
zip TEXT
);
CREATE INDEX idx_shipments_location ON shipments(state, city, zip);
-- Useful for queries that filter by state, or by state + city, or by all three.
CREATE TABLE transactions (
id INTEGER PRIMARY KEY,
account_id INTEGER,
created_at TEXT,
amount REAL
);
CREATE INDEX idx_transactions_recent ON transactions(account_id, created_at DESC);
INSERT INTO transactions VALUES (1, 100, '2025-01-15', 250.00);
INSERT INTO transactions VALUES (2, 100, '2025-03-20', 75.00);
INSERT INTO transactions VALUES (3, 100, '2025-06-01', 300.00);
-- The index stores rows per account with the most recent date first.
SELECT created_at, amount FROM transactions
WHERE account_id = 100
ORDER BY created_at DESC;
-- 2025-06-01|300.0
-- 2025-03-20|75.0
-- 2025-01-15|250.0
CREATE TABLE orders (
id INTEGER PRIMARY KEY,
customer_id INTEGER,
status TEXT,
amount REAL
);
CREATE INDEX idx_orders_pending ON orders(customer_id, amount)
WHERE status = 'pending';
INSERT INTO orders VALUES (1, 10, 'pending', 99.99);
INSERT INTO orders VALUES (2, 10, 'shipped', 149.99);
INSERT INTO orders VALUES (3, 20, 'pending', 49.99);
-- This query can use the partial index because its WHERE clause
-- includes the index's condition.
SELECT id, customer_id, amount FROM orders
WHERE status = 'pending'
ORDER BY amount DESC;
-- 1|10|99.99
-- 3|20|49.99
CREATE TABLE users (
id INTEGER PRIMARY KEY,
first_name TEXT,
last_name TEXT,
email TEXT
);
CREATE INDEX idx_users_email_lower ON users(lower(email));
INSERT INTO users VALUES (1, 'Alice', 'Smith', '[email protected]');
INSERT INTO users VALUES (2, 'Bob', 'Jones', '[email protected]');
-- The expression index speeds up case-insensitive email lookups.
SELECT first_name, email FROM users
WHERE lower(email) = '[email protected]';
-- Alice|[email protected]
CREATE TABLE contacts (
id INTEGER PRIMARY KEY,
first_name TEXT,
last_name TEXT
);
CREATE INDEX idx_contacts_fullname
ON contacts(first_name COLLATE NOCASE, last_name COLLATE NOCASE);
-- Queries using case-insensitive comparisons on these columns can use this index.
CREATE TABLE products (
id INTEGER PRIMARY KEY,
name TEXT,
category TEXT,
price REAL
);
CREATE INDEX IF NOT EXISTS idx_products_name ON products(name);
-- Running the same statement again does not produce an error.
CREATE INDEX IF NOT EXISTS idx_products_name ON products(name);
CREATE TABLE products (
id INTEGER PRIMARY KEY,
name TEXT,
price REAL
);
CREATE INDEX idx_products_name ON products(name);
-- Remove the index
DROP INDEX idx_products_name;
-- Safe to run even if the index does not exist
DROP INDEX IF EXISTS idx_products_name;
CREATE TABLE users (
id INTEGER PRIMARY KEY,
email TEXT
);
CREATE UNIQUE INDEX idx_users_email ON users(email);
-- Multiple NULLs are allowed because NULL values are considered distinct.
INSERT INTO users VALUES (1, NULL);
INSERT INTO users VALUES (2, NULL);
SELECT * FROM users;
-- 1|
-- 2|
CREATE [TEMP | TEMPORARY] VIEW [IF NOT EXISTS] view-name
AS select-stmt
DROP VIEW [IF EXISTS] view-name
The CREATE VIEW statement assigns a name to a pre-packaged SELECT statement. Once created, a view can be used anywhere a table name is accepted in the FROM clause of a SELECT: in simple queries, JOINs, subqueries, and even in the definitions of other views.
Views are read-only. You cannot use INSERT, UPDATE, or DELETE on a view. A view does not store data; every time it is referenced in a query, Turso expands it into the underlying SELECT and executes it against the current table data. This means the results of a view always reflect the latest state of the base tables.
The DROP VIEW statement removes a view definition from the database. It has no effect on the underlying tables or their data.
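Both properties of views described above, live expansion at query time and read-only access, can be demonstrated together. A sketch using Python's sqlite3 module as a stand-in (Turso is SQLite-compatible); the table and view names are hypothetical:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
con.execute("CREATE VIEW big_orders AS SELECT * FROM orders WHERE amount > 100")

# The view is expanded at query time, so it sees rows inserted after
# the view was created.
con.execute("INSERT INTO orders VALUES (1, 250.0)")
rows = con.execute("SELECT * FROM big_orders").fetchall()
print(rows)  # [(1, 250.0)]

# Writing through a view is rejected.
try:
    con.execute("DELETE FROM big_orders")
    writable = True
except sqlite3.OperationalError:
    writable = False
print(writable)  # False
```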
The AS keyword is followed by any valid SELECT statement. This SELECT defines what the view returns when queried. The column names of the view are derived from the result columns of the SELECT. Use the AS alias syntax in the SELECT to give view columns well-defined names, particularly when the result includes expressions or function calls.
-- Without aliases, computed columns get auto-generated names
CREATE VIEW order_stats AS
SELECT customer_id, COUNT(*), SUM(amount)
FROM orders
GROUP BY customer_id;
-- With aliases, column names are explicit and predictable
CREATE VIEW order_stats AS
SELECT customer_id,
COUNT(*) AS order_count,
SUM(amount) AS total_spent
FROM orders
GROUP BY customer_id;
If the TEMP or TEMPORARY keyword appears between CREATE and VIEW, the view is only visible to the current database connection and is automatically deleted when the connection is closed. Temporary views are useful for intermediate results within a session.
CREATE TEMP VIEW recent_orders AS
SELECT * FROM orders WHERE created_at > '2024-01-01';
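The per-connection visibility of a TEMP view can be demonstrated with two connections to the same database file. A sketch using Python's sqlite3 module as a stand-in (Turso is SQLite-compatible); the file path and names are hypothetical:

```python
import os
import sqlite3
import tempfile

# Two connections to the same database file (a throwaway temp file here).
path = os.path.join(tempfile.mkdtemp(), "demo.db")
con1 = sqlite3.connect(path)
con2 = sqlite3.connect(path)

con1.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, created_at TEXT)")
con1.commit()
con1.execute(
    "CREATE TEMP VIEW recent_orders AS "
    "SELECT * FROM orders WHERE created_at > '2024-01-01'"
)

# The temp view exists only on con1; con2 cannot see it.
try:
    con2.execute("SELECT * FROM recent_orders")
    visible_elsewhere = True
except sqlite3.OperationalError:
    visible_elsewhere = False
print(visible_elsewhere)  # False
```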
When IF NOT EXISTS is included, the statement is a no-op if a view with the same name already exists. Without this clause, attempting to create a view that already exists produces an error.
When IF EXISTS is included in a DROP VIEW statement, no error is raised if the named view does not exist. Without this clause, dropping a nonexistent view produces an error.
-- Safe to run even if the view does not exist
DROP VIEW IF EXISTS old_report;
CREATE TABLE products (
id INTEGER PRIMARY KEY,
name TEXT,
price REAL,
category TEXT
);
INSERT INTO products VALUES (1, 'Laptop', 999.99, 'Electronics');
INSERT INTO products VALUES (2, 'Headphones', 49.99, 'Electronics');
INSERT INTO products VALUES (3, 'Notebook', 5.99, 'Office');
INSERT INTO products VALUES (4, 'Pen', 1.99, 'Office');
-- Create a view that filters to one category
CREATE VIEW electronics AS
SELECT id, name, price
FROM products
WHERE category = 'Electronics';
SELECT * FROM electronics;
-- 1|Laptop|999.99
-- 2|Headphones|49.99
CREATE TABLE products (
id INTEGER PRIMARY KEY,
name TEXT,
price REAL,
quantity INTEGER
);
INSERT INTO products VALUES (1, 'Laptop', 999.99, 5);
INSERT INTO products VALUES (2, 'Mouse', 29.99, 50);
CREATE VIEW inventory_value AS
SELECT name, price, quantity, price * quantity AS total_value
FROM products;
SELECT * FROM inventory_value;
-- Laptop|999.99|5|4999.95
-- Mouse|29.99|50|1499.5
CREATE TABLE orders (
id INTEGER PRIMARY KEY,
customer_id INTEGER,
product TEXT,
amount REAL
);
INSERT INTO orders VALUES (1, 10, 'Laptop', 999.99);
INSERT INTO orders VALUES (2, 10, 'Mouse', 29.99);
INSERT INTO orders VALUES (3, 20, 'Keyboard', 79.99);
INSERT INTO orders VALUES (4, 20, 'Monitor', 399.99);
INSERT INTO orders VALUES (5, 20, 'Cable', 9.99);
CREATE VIEW customer_totals AS
SELECT customer_id,
COUNT(*) AS order_count,
SUM(amount) AS total_spent
FROM orders
GROUP BY customer_id;
SELECT * FROM customer_totals;
-- 10|2|1029.98
-- 20|3|489.97
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
INSERT INTO customers VALUES (1, 'Alice');
INSERT INTO customers VALUES (2, 'Bob');
INSERT INTO orders VALUES (1, 1, 100.00);
INSERT INTO orders VALUES (2, 1, 200.00);
INSERT INTO orders VALUES (3, 2, 150.00);
CREATE VIEW order_details AS
SELECT c.name, o.amount
FROM customers c
JOIN orders o ON c.id = o.customer_id;
SELECT * FROM order_details;
-- Alice|100.0
-- Alice|200.0
-- Bob|150.0
A view can be used just like a table in a SELECT. You can apply WHERE, ORDER BY, LIMIT, and any other clause on top of it.
CREATE TABLE products (
id INTEGER PRIMARY KEY,
name TEXT,
price REAL,
category TEXT
);
INSERT INTO products VALUES (1, 'Laptop', 999.99, 'Electronics');
INSERT INTO products VALUES (2, 'Mouse', 29.99, 'Electronics');
INSERT INTO products VALUES (3, 'Notebook', 5.99, 'Office');
CREATE VIEW electronics AS
SELECT id, name, price
FROM products
WHERE category = 'Electronics';
-- Filter the view further
SELECT * FROM electronics WHERE price > 50;
-- 1|Laptop|999.99
-- Sort the view results
SELECT * FROM electronics ORDER BY price DESC;
-- 1|Laptop|999.99
-- 2|Mouse|29.99
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
INSERT INTO customers VALUES (1, 'Alice');
INSERT INTO customers VALUES (2, 'Bob');
INSERT INTO orders VALUES (1, 1, 100.00);
INSERT INTO orders VALUES (2, 1, 200.00);
INSERT INTO orders VALUES (3, 2, 150.00);
CREATE VIEW customer_totals AS
SELECT customer_id, SUM(amount) AS total
FROM orders
GROUP BY customer_id;
SELECT c.name, ct.total
FROM customers c
JOIN customer_totals ct ON c.id = ct.customer_id
ORDER BY ct.total DESC;
-- Alice|300.0
-- Bob|150.0
Views can be built on top of other views.
CREATE TABLE products (
id INTEGER PRIMARY KEY,
name TEXT,
price REAL,
category TEXT
);
INSERT INTO products VALUES (1, 'Laptop', 999.99, 'Electronics');
INSERT INTO products VALUES (2, 'Mouse', 29.99, 'Electronics');
INSERT INTO products VALUES (3, 'Notebook', 5.99, 'Office');
CREATE VIEW electronics AS
SELECT id, name, price
FROM products
WHERE category = 'Electronics';
CREATE VIEW expensive_electronics AS
SELECT * FROM electronics WHERE price > 100;
SELECT * FROM expensive_electronics;
-- 1|Laptop|999.99
CREATE TABLE products (
id INTEGER PRIMARY KEY,
name TEXT,
price REAL,
category TEXT
);
INSERT INTO products VALUES (1, 'Laptop', 999.99, 'Electronics');
INSERT INTO products VALUES (2, 'Mouse', 29.99, 'Electronics');
INSERT INTO products VALUES (3, 'Notebook', 5.99, 'Office');
CREATE VIEW electronics AS
SELECT id, name, price
FROM products
WHERE category = 'Electronics';
SELECT name FROM products WHERE id IN (SELECT id FROM electronics);
-- Laptop
-- Mouse
CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, price REAL);
INSERT INTO products VALUES (1, 'Laptop', 999.99);
INSERT INTO products VALUES (2, 'Pen', 1.99);
CREATE TEMP VIEW cheap_products AS
SELECT id, name, price
FROM products
WHERE price < 100;
SELECT * FROM cheap_products;
-- 2|Pen|1.99
CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT);
CREATE VIEW all_products AS SELECT * FROM products;
-- Drop the view
DROP VIEW all_products;
-- Safe to run even if the view does not exist
DROP VIEW IF EXISTS all_products;
Attempting to INSERT, UPDATE, or DELETE against a view produces an error.
CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, price REAL);
INSERT INTO products VALUES (1, 'Laptop', 999.99);
CREATE VIEW all_products AS SELECT * FROM products;
-- All of the following produce errors:
-- INSERT INTO all_products VALUES (2, 'Mouse', 29.99);
-- UPDATE all_products SET price = 500 WHERE id = 1;
-- DELETE FROM all_products WHERE id = 1;
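Because Turso views follow SQLite's read-only semantics, this behavior can be sketched with Python's built-in `sqlite3` module, used here as a stand-in (the exact error text in Turso may differ):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, price REAL);
    INSERT INTO products VALUES (1, 'Laptop', 999.99);
    CREATE VIEW all_products AS SELECT * FROM products;
""")

# Writing through an ordinary view is rejected.
try:
    conn.execute("INSERT INTO all_products VALUES (2, 'Mouse', 29.99)")
    view_write_failed = False
except sqlite3.OperationalError as exc:
    view_write_failed = True
    print("error:", exc)

# The underlying table is untouched.
row_count = conn.execute("SELECT COUNT(*) FROM products").fetchone()[0]
print(row_count)  # 1
```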
Turso supports CREATE VIEW with the same syntax as SQLite, with the following notes:
Column name list: The parenthesized column name list after the view name (e.g., CREATE VIEW v(a, b) AS SELECT ...) is parsed, but the specified column names are not applied to the view output. Use AS aliases in the SELECT statement instead to control column names.
IF NOT EXISTS: The IF NOT EXISTS clause is accepted by the parser. When a view with the same name already exists, a diagnostic message is emitted but execution continues.
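The recommended way to control a view's output column names is with AS aliases in the SELECT. A minimal sketch using Python's `sqlite3` module (the `catalog` view and its names are illustrative, not from the text above) shows that the aliases become the view's column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, price REAL);
    CREATE VIEW catalog AS
        SELECT name AS product_name, price AS unit_price FROM products;
""")

# The view's output column names come from the AS aliases in its SELECT.
columns = [col[0] for col in conn.execute("SELECT * FROM catalog").description]
print(columns)  # ['product_name', 'unit_price']
```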
CREATE TRIGGER [IF NOT EXISTS] trigger-name
[BEFORE | AFTER] {DELETE | INSERT | UPDATE [OF column-name [, ...]]}
ON table-name
[FOR EACH ROW]
[WHEN expr]
BEGIN
statement; [statement; ...]
END;
The CREATE TRIGGER statement defines a trigger – a set of SQL statements that automatically execute in response to INSERT, UPDATE, or DELETE operations on a specified table. Triggers are useful for enforcing business rules, maintaining audit logs, synchronizing related tables, and computing derived values.
Turso supports BEFORE and AFTER triggers on tables. Each trigger fires once per affected row (FOR EACH ROW semantics). The trigger body can contain one or more INSERT, UPDATE, DELETE, or SELECT statements, which execute as part of the same transaction as the triggering statement.
Triggers require the --experimental-triggers flag when starting the Turso CLI.
The optional timing keyword controls when the trigger body executes relative to the triggering operation.
| Timing | Behavior |
|---|---|
BEFORE | Executes before the row is modified. Default if no timing is specified. |
AFTER | Executes after the row has been modified. |
When no timing keyword is provided, BEFORE is used by default.
The event determines which data modification operation causes the trigger to fire.
| Event | Description |
|---|---|
INSERT | Fires when a new row is inserted into the table. |
UPDATE | Fires when any column of a row is updated. |
UPDATE OF column-name [, ...] | Fires only when one of the specified columns appears in the SET clause of an UPDATE statement. |
DELETE | Fires when a row is deleted from the table. |
For UPDATE OF, the trigger fires based on which columns appear in the SET clause, not on whether the column value actually changes. Column names that do not exist in the table are silently ignored.
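This distinction can be demonstrated with Python's `sqlite3` module, which implements the same UPDATE OF semantics (the table and trigger names here are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (id INTEGER PRIMARY KEY, email TEXT, name TEXT);
    CREATE TABLE log (msg TEXT);
    CREATE TRIGGER email_touched AFTER UPDATE OF email ON accounts
    BEGIN
        INSERT INTO log VALUES ('email appeared in SET clause');
    END;
    INSERT INTO accounts VALUES (1, 'a@example.com', 'Alice');
""")

# Fires: email is in the SET clause, even though its value does not change.
conn.execute("UPDATE accounts SET email = email WHERE id = 1")
# Does not fire: email is not mentioned in the SET clause.
conn.execute("UPDATE accounts SET name = 'Alicia' WHERE id = 1")

fired = conn.execute("SELECT COUNT(*) FROM log").fetchone()[0]
print(fired)  # 1
```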
Turso supports only row-level triggers. The FOR EACH ROW clause is optional and has no effect – row-level semantics are always used. The trigger body executes once for each row affected by the triggering statement.
The optional WHEN clause provides a condition that is evaluated for each row. If the condition evaluates to false or NULL, the trigger body is skipped for that row. The WHEN expression can reference NEW and OLD row values (see below).
Inside a trigger body and WHEN clause, the special table references NEW and OLD provide access to the row values being modified.
| Event | NEW | OLD |
|---|---|---|
INSERT | The row being inserted. | Not available. |
UPDATE | The row after the update. | The row before the update. |
DELETE | Not available. | The row being deleted. |
Use dot notation to reference individual columns: NEW.column_name or OLD.column_name. Both NEW.rowid and named rowid aliases (e.g., NEW.id for an INTEGER PRIMARY KEY column) are supported.
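A short sketch with Python's `sqlite3` module (illustrative table names) confirms that NEW.rowid and a named rowid alias such as NEW.id resolve to the same value:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT);
    CREATE TABLE log (via_rowid INTEGER, via_alias INTEGER);
    CREATE TRIGGER note_added AFTER INSERT ON notes
    BEGIN
        -- NEW.rowid and the alias NEW.id name the same value here.
        INSERT INTO log VALUES (NEW.rowid, NEW.id);
    END;
    INSERT INTO notes (body) VALUES ('hello');
""")

logged = conn.execute("SELECT via_rowid, via_alias FROM log").fetchone()
print(logged)  # (1, 1)
```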
When IF NOT EXISTS is included, Turso silently does nothing if a trigger with the same name already exists. Without this clause, attempting to create a trigger whose name is already in use results in an error.
The trigger body is enclosed between BEGIN and END and contains one or more SQL statements, each terminated by a semicolon. The supported statement types are:
- INSERT
- UPDATE
- DELETE
- SELECT

All statements in the trigger body execute within the same transaction as the triggering statement. If any statement in the trigger body fails, the entire triggering operation is rolled back.
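The rollback behavior can be sketched with Python's `sqlite3` module, used as a stand-in for the SQLite semantics Turso follows (table names illustrative): a constraint violation inside the trigger body undoes the triggering statement as well.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, product TEXT);
    CREATE TABLE audit (note TEXT NOT NULL);
    CREATE TRIGGER bad_audit AFTER INSERT ON orders
    BEGIN
        INSERT INTO audit VALUES (NULL);  -- violates NOT NULL
    END;
""")

try:
    conn.execute("INSERT INTO orders VALUES (1, 'Widget')")
    insert_failed = False
except sqlite3.IntegrityError as exc:
    insert_failed = True
    print("error:", exc)

# The failed trigger body undid the triggering INSERT as well.
orders_count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(orders_count)  # 0
```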
When multiple triggers are defined on the same table for the same event and timing, they fire in reverse order of creation (last created fires first).
A trigger’s body can cause other triggers to fire. For example, an AFTER INSERT trigger on table A that inserts into table B will fire any insert triggers on table B.
Triggers can also be recursive – a trigger can modify the same table that caused it to fire. However, recursive firing is limited by default: a trigger does not fire itself again within the same chain, so a recursive AFTER INSERT trigger that inserts into the same table executes at most one additional level of inserts before stopping.
Triggers are automatically dropped when the table they are attached to is dropped. To manually remove a trigger, use DROP TRIGGER:
DROP TRIGGER [IF EXISTS] trigger-name;
CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, price REAL);
CREATE TABLE audit_log (id INTEGER PRIMARY KEY, action TEXT, product_name TEXT);
-- Log every new product insertion
CREATE TRIGGER log_product_insert AFTER INSERT ON products
BEGIN
INSERT INTO audit_log (action, product_name) VALUES ('INSERT', NEW.name);
END;
INSERT INTO products (name, price) VALUES ('Laptop', 999.99);
INSERT INTO products (name, price) VALUES ('Mouse', 29.99);
SELECT * FROM audit_log;
-- 1|INSERT|Laptop
-- 2|INSERT|Mouse
CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, salary REAL);
CREATE TABLE salary_changes (id INTEGER PRIMARY KEY, old_salary REAL, new_salary REAL);
CREATE TRIGGER log_salary_update AFTER UPDATE ON employees
BEGIN
INSERT INTO salary_changes (old_salary, new_salary)
VALUES (OLD.salary, NEW.salary);
END;
INSERT INTO employees VALUES (1, 'Alice', 75000.0);
UPDATE employees SET salary = 80000.0 WHERE id = 1;
SELECT * FROM salary_changes;
-- 1|75000.0|80000.0
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL);
CREATE TABLE deleted_orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL);
CREATE TRIGGER archive_deleted_order BEFORE DELETE ON orders
BEGIN
INSERT INTO deleted_orders (id, customer, total) VALUES (OLD.id, OLD.customer, OLD.total);
END;
INSERT INTO orders VALUES (1, 'Bob', 150.00);
INSERT INTO orders VALUES (2, 'Carol', 250.00);
DELETE FROM orders WHERE customer = 'Bob';
SELECT * FROM deleted_orders;
-- 1|Bob|150.0
CREATE TABLE sensor_readings (id INTEGER PRIMARY KEY, value INTEGER);
CREATE TABLE alerts (id INTEGER PRIMARY KEY, reading_id INTEGER);
-- Only log an alert when the reading exceeds a threshold
CREATE TRIGGER high_value_alert AFTER INSERT ON sensor_readings
WHEN NEW.value > 100
BEGIN
INSERT INTO alerts (reading_id) VALUES (NEW.id);
END;
INSERT INTO sensor_readings VALUES (1, 50);
INSERT INTO sensor_readings VALUES (2, 150);
INSERT INTO sensor_readings VALUES (3, 75);
SELECT * FROM alerts;
-- 1|2
CREATE TABLE accounts (id INTEGER PRIMARY KEY, email TEXT, password_hash TEXT);
CREATE TABLE security_log (id INTEGER PRIMARY KEY, msg TEXT);
-- Only fire when the password column is updated
CREATE TRIGGER log_password_change AFTER UPDATE OF password_hash ON accounts
BEGIN
INSERT INTO security_log (msg) VALUES ('password changed for account ' || NEW.id);
END;
INSERT INTO accounts VALUES (1, '[email protected]', 'hash1');
UPDATE accounts SET email = '[email protected]' WHERE id = 1;
UPDATE accounts SET password_hash = 'hash2' WHERE id = 1;
SELECT * FROM security_log;
-- 1|password changed for account 1
CREATE TABLE inventory (id INTEGER PRIMARY KEY, product TEXT, quantity INTEGER);
CREATE TABLE restock_log (id INTEGER PRIMARY KEY, product TEXT);
CREATE TABLE quantity_log (id INTEGER PRIMARY KEY, product TEXT, qty INTEGER);
CREATE TRIGGER track_inventory AFTER INSERT ON inventory
BEGIN
INSERT INTO restock_log (product) VALUES (NEW.product);
INSERT INTO quantity_log (product, qty) VALUES (NEW.product, NEW.quantity);
END;
INSERT INTO inventory VALUES (1, 'Widget', 100);
SELECT * FROM restock_log;
-- 1|Widget
SELECT * FROM quantity_log;
-- 1|Widget|100
CREATE TABLE departments (id INTEGER PRIMARY KEY, budget INTEGER);
CREATE TABLE projects (id INTEGER PRIMARY KEY, dept_id INTEGER, cost INTEGER);
CREATE TABLE notifications (id INTEGER PRIMARY KEY, msg TEXT);
INSERT INTO departments VALUES (1, 50000);
-- When a project is added, update department budget
CREATE TRIGGER deduct_budget AFTER INSERT ON projects
BEGIN
UPDATE departments SET budget = budget - NEW.cost WHERE id = NEW.dept_id;
END;
-- When budget changes, log a notification
CREATE TRIGGER budget_notification AFTER UPDATE ON departments
BEGIN
INSERT INTO notifications (msg) VALUES ('budget updated: ' || NEW.budget);
END;
INSERT INTO projects VALUES (1, 1, 10000);
SELECT * FROM departments;
-- 1|40000
SELECT * FROM notifications;
-- 1|budget updated: 40000
CREATE TABLE transactions (id INTEGER PRIMARY KEY, amount INTEGER, status TEXT DEFAULT 'pending');
-- Automatically mark large transactions as requiring review
CREATE TRIGGER flag_large_transactions AFTER INSERT ON transactions
WHEN NEW.amount > 500 AND NEW.amount < 10000
BEGIN
UPDATE transactions SET status = 'review' WHERE id = NEW.id;
END;
INSERT INTO transactions (id, amount) VALUES (1, 100);
INSERT INTO transactions (id, amount) VALUES (2, 750);
INSERT INTO transactions (id, amount) VALUES (3, 50000);
SELECT * FROM transactions ORDER BY id;
-- 1|100|pending
-- 2|750|review
-- 3|50000|pending
CREATE TABLE events (id INTEGER PRIMARY KEY, name TEXT);
CREATE TRIGGER IF NOT EXISTS log_event BEFORE INSERT ON events
BEGIN
SELECT 1;
END;
-- This does not produce an error because of IF NOT EXISTS
CREATE TRIGGER IF NOT EXISTS log_event BEFORE INSERT ON events
BEGIN
SELECT 1;
END;
SELECT name FROM sqlite_schema WHERE type = 'trigger' AND name = 'log_event';
-- log_event
- Triggers require the --experimental-triggers flag.
- INSTEAD OF triggers (used with views) are not yet supported.
- TEMPORARY triggers are not yet supported.
- The RAISE() function is not supported within trigger bodies.

ALTER TABLE table-name RENAME TO new-table-name
ALTER TABLE table-name RENAME [COLUMN] column-name TO new-column-name
ALTER TABLE table-name ADD [COLUMN] column-def
ALTER TABLE table-name DROP [COLUMN] column-name
ALTER TABLE table-name ALTER COLUMN column-name TO column-def
The ALTER TABLE statement modifies the schema of an existing table. Turso supports five operations: renaming a table, renaming a column, adding a column, dropping a column, and altering a column definition. Unlike CREATE TABLE, ALTER TABLE works on tables that already exist and may already contain data.
Each operation modifies the table’s schema entry in sqlite_schema. Renaming and adding columns are fast operations whose execution time is independent of the number of rows. Dropping a column rewrites the table data and is proportional to table size.
System tables (those with names beginning with sqlite_) cannot be altered.
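This protection matches SQLite, so it can be sketched with Python's `sqlite3` module (used here as a stand-in; Turso's error message may differ):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")

# Names beginning with sqlite_ are reserved; altering them is rejected.
try:
    conn.execute("ALTER TABLE sqlite_master RENAME TO my_schema")
    alter_failed = False
except sqlite3.OperationalError as exc:
    alter_failed = True
    print("error:", exc)  # e.g. "table sqlite_master may not be altered"
print(alter_failed)  # True
```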
Renames the table from table-name to new-table-name. The new name must not collide with any existing table, view, or index name. The table cannot be moved between attached databases – it is only renamed within its current database.
References to the table in triggers, views, and foreign key constraints are automatically updated to use the new name, including self-referencing foreign keys.
CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT);
ALTER TABLE employees RENAME TO staff;
SELECT name FROM sqlite_schema WHERE type = 'table';
-- staff
Renames an existing column from column-name to new-column-name. The COLUMN keyword is optional. All references to the column in indexes, triggers, views, and foreign key constraints (both child and parent sides) are automatically updated.
If the rename would introduce a semantic ambiguity in a trigger or view, the operation fails and no changes are applied.
CREATE TABLE products (id INTEGER PRIMARY KEY, product_name TEXT NOT NULL, price REAL);
ALTER TABLE products RENAME COLUMN product_name TO name;
SELECT sql FROM sqlite_schema WHERE name = 'products';
-- CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT NOT NULL, price REAL)
Renaming a column that participates in a foreign key constraint updates both sides of the relationship:
PRAGMA foreign_keys = ON;
CREATE TABLE orders (order_id INTEGER PRIMARY KEY, date TEXT);
CREATE TABLE items (item_id INTEGER PRIMARY KEY, oid INTEGER,
FOREIGN KEY (oid) REFERENCES orders(order_id));
ALTER TABLE orders RENAME COLUMN order_id TO ord_id;
SELECT sql FROM sqlite_schema WHERE name = 'items';
-- CREATE TABLE items (item_id INTEGER PRIMARY KEY, oid INTEGER, FOREIGN KEY (oid) REFERENCES orders (ord_id))
Appends a new column to the end of the table’s column list. The COLUMN keyword is optional. The column-def follows the same syntax as a column definition in CREATE TABLE, including an optional type name, DEFAULT, NOT NULL, CHECK, COLLATE, and REFERENCES clauses.
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER);
ALTER TABLE orders ADD COLUMN total_amount REAL DEFAULT 0.0;
ALTER TABLE orders ADD COLUMN status TEXT DEFAULT 'pending';
SELECT sql FROM sqlite_schema WHERE name = 'orders';
-- CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total_amount REAL DEFAULT 0.0, status TEXT DEFAULT 'pending')
The new column may not have:
| Restriction | Reason |
|---|---|
PRIMARY KEY or UNIQUE constraint | Would require rewriting existing data and indexes. |
| A non-constant default expression | The default must be a literal, a signed literal, or a parenthesized constant. Expressions like (NULL + 5) are rejected. |
NOT NULL without a non-null default (when the table has rows) | Existing rows would have NULL for the new column, violating the constraint. |
CURRENT_TIME, CURRENT_DATE, or CURRENT_TIMESTAMP as default (when the table has rows) | These are non-deterministic and cannot be used to backfill existing rows. |
GENERATED ALWAYS ... STORED or AS (expr) | Adding generated columns via ALTER TABLE is not supported. |
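These restrictions can be exercised against Python's `sqlite3` module as a rough stand-in (note one difference: SQLite rejects non-constant defaults even on empty tables, whereas the rule above is stated for tables with rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE items (id INTEGER PRIMARY KEY);
    INSERT INTO items VALUES (1);
""")

rejected = []
for ddl in (
    "ALTER TABLE items ADD COLUMN sku TEXT PRIMARY KEY",      # PRIMARY KEY constraint
    "ALTER TABLE items ADD COLUMN note TEXT NOT NULL",        # NOT NULL, no default, table has rows
    "ALTER TABLE items ADD COLUMN seen TEXT DEFAULT CURRENT_TIMESTAMP",  # non-constant default
):
    try:
        conn.execute(ddl)
    except sqlite3.OperationalError as exc:
        rejected.append(str(exc))
        print("rejected:", exc)

print(len(rejected))  # 3
```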
A NOT NULL column without a default value is permitted if the table is empty:
CREATE TABLE contacts (id INTEGER PRIMARY KEY);
ALTER TABLE contacts ADD name TEXT NOT NULL;
SELECT sql FROM sqlite_schema WHERE type = 'table' AND name = 'contacts';
-- CREATE TABLE contacts (id INTEGER PRIMARY KEY, name TEXT NOT NULL)
A NOT NULL column with a non-null default succeeds even on tables with existing rows:
CREATE TABLE tasks (id INTEGER PRIMARY KEY);
INSERT INTO tasks VALUES (1);
ALTER TABLE tasks ADD priority INTEGER NOT NULL DEFAULT 5;
INSERT INTO tasks (id) VALUES (2);
SELECT * FROM tasks ORDER BY id;
-- 1|5
-- 2|5
Adding a column with a foreign key reference is supported and updates the schema to include a table-level FOREIGN KEY clause:
CREATE TABLE departments (id INTEGER PRIMARY KEY);
CREATE TABLE staff (id INTEGER PRIMARY KEY);
ALTER TABLE staff ADD COLUMN dept_id REFERENCES departments(id);
SELECT sql FROM sqlite_schema WHERE name = 'staff';
-- CREATE TABLE staff (id INTEGER PRIMARY KEY, dept_id, FOREIGN KEY (dept_id) REFERENCES departments(id))
Duplicate column names are rejected (case-insensitive):
CREATE TABLE items (name TEXT);
ALTER TABLE items ADD COLUMN name TEXT;
-- Error: duplicate column name
Removes a column from the table and rewrites the table data to exclude the dropped column’s values. The COLUMN keyword is optional.
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, email TEXT, phone TEXT);
INSERT INTO customers VALUES (1, 'Alice', '[email protected]', '555-0100');
ALTER TABLE customers DROP COLUMN phone;
SELECT * FROM customers;
-- 1|Alice|[email protected]
A column cannot be dropped if it:
| Restriction | Reason |
|---|---|
Is a PRIMARY KEY or part of one | The primary key is essential to the table’s identity. |
Has a UNIQUE constraint | A unique index depends on the column. |
| Is referenced by an index | The index would become invalid. |
Is referenced by a table-level CHECK constraint | The CHECK expression would reference a missing column. |
| Is used in a generated column expression | The generated column would become invalid. |
A column-level CHECK constraint attached to the dropped column is removed along with the column:
CREATE TABLE metrics (
id INTEGER PRIMARY KEY,
value REAL CHECK (value BETWEEN 0.0 AND 1000.0),
label TEXT NOT NULL
);
INSERT INTO metrics VALUES (1, 500.0, 'temperature');
ALTER TABLE metrics DROP COLUMN value;
INSERT INTO metrics VALUES (2, 'humidity');
SELECT * FROM metrics;
-- 1|temperature
-- 2|humidity
This is a Turso extension not present in SQLite.
The ALTER COLUMN operation changes a column’s name and definition in a single statement. The column-def after TO is a full column definition (name, type, and constraints), replacing the old column definition entirely. Data in existing rows is preserved.
CREATE TABLE sensors (id INTEGER PRIMARY KEY, reading INTEGER);
CREATE INDEX idx_reading ON sensors (reading);
ALTER TABLE sensors ALTER COLUMN reading TO measurement BLOB;
SELECT sql FROM sqlite_schema;
-- CREATE TABLE sensors (id INTEGER PRIMARY KEY, measurement BLOB)
-- CREATE INDEX idx_reading ON sensors (measurement)
The new column definition may not include PRIMARY KEY or UNIQUE constraints:
CREATE TABLE config (key TEXT, value TEXT);
ALTER TABLE config ALTER COLUMN value TO value PRIMARY KEY;
-- Error: cannot add PRIMARY KEY via ALTER COLUMN
The new definition may specify a generated column expression, as long as at least one non-generated column remains in the table:
CREATE TABLE items (name TEXT, price REAL);
ALTER TABLE items ALTER COLUMN price TO computed_price AS (123);
SELECT sql FROM sqlite_schema WHERE name = 'items';
-- CREATE TABLE items (name TEXT, computed_price AS (123))
A common workflow is creating a table and then adding columns as requirements change:
CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT NOT NULL);
INSERT INTO users VALUES (1, 'alice'), (2, 'bob');
ALTER TABLE users ADD email TEXT DEFAULT 'unknown';
-- A CURRENT_TIMESTAMP default would be rejected here because the table has rows
ALTER TABLE users ADD created_at TEXT;
SELECT id, username, email FROM users ORDER BY id;
-- 1|alice|unknown
-- 2|bob|unknown
CREATE TABLE t (a INTEGER, b TEXT, c REAL);
CREATE INDEX idx_t_a ON t (a);
ALTER TABLE t RENAME COLUMN a TO user_id;
ALTER TABLE t RENAME COLUMN b TO user_name;
ALTER TABLE t RENAME COLUMN c TO balance;
ALTER TABLE t RENAME TO accounts;
SELECT sql FROM sqlite_schema ORDER BY type;
-- CREATE INDEX idx_t_a ON accounts (user_id)
-- CREATE TABLE accounts (user_id INTEGER, user_name TEXT, balance REAL)
CREATE TABLE logs (id INTEGER PRIMARY KEY, message TEXT, debug_info TEXT);
INSERT INTO logs VALUES (1, 'Server started', 'verbose debug data');
INSERT INTO logs VALUES (2, 'User login', 'session details');
ALTER TABLE logs DROP COLUMN debug_info;
SELECT * FROM logs;
-- 1|Server started
-- 2|User login
When a parent table is renamed, foreign key references in child tables are updated automatically:
PRAGMA foreign_keys = ON;
CREATE TABLE categories (id INTEGER PRIMARY KEY);
CREATE TABLE products (id INTEGER PRIMARY KEY, cat_id INTEGER,
FOREIGN KEY (cat_id) REFERENCES categories(id));
ALTER TABLE categories RENAME TO product_categories;
SELECT sql FROM sqlite_schema WHERE name = 'products';
-- CREATE TABLE products (id INTEGER PRIMARY KEY, cat_id INTEGER, FOREIGN KEY (cat_id) REFERENCES product_categories (id))
Adding a generated column via ALTER TABLE ADD COLUMN is not supported. Use ALTER COLUMN to convert an existing column to a generated column, or recreate the table.

DROP TABLE [IF EXISTS] table-name
DROP INDEX [IF EXISTS] index-name
DROP VIEW [IF EXISTS] view-name
DROP TRIGGER [IF EXISTS] trigger-name
The DROP statement removes a database object (table, index, view, or trigger) from the database. The object and all its associated data are permanently deleted. This action cannot be undone.
If the IF EXISTS clause is included, the statement is a no-op when the named object does not exist. Without IF EXISTS, attempting to drop a nonexistent object raises an error.
Removes a table and all of its data, indexes, and triggers from the database.
-- Remove a table
DROP TABLE users;
-- Remove a table only if it exists (no error if missing)
DROP TABLE IF EXISTS users;
When a table is dropped:
- All rows in the table are permanently deleted.
- All indexes on the table are dropped.
- All triggers attached to the table are dropped.
Removes an index from the database. The underlying table data is not affected.
-- Remove an index
DROP INDEX idx_users_email;
-- Remove an index only if it exists
DROP INDEX IF EXISTS idx_users_email;
Dropping an index does not affect the data in the table — it only removes the index structure. Queries that previously used the index will still work, but may run slower without it.
Removes a view definition from the database. Since views do not store data, no data is deleted.
-- Remove a view
DROP VIEW active_users;
-- Remove a view only if it exists
DROP VIEW IF EXISTS active_users;
Removes a trigger from the database.
-- Remove a trigger
DROP TRIGGER audit_insert;
-- Remove a trigger only if it exists
DROP TRIGGER IF EXISTS audit_insert;
-- Create a table with an index, then drop both
CREATE TABLE products (
id INTEGER PRIMARY KEY,
name TEXT NOT NULL,
price REAL
);
CREATE INDEX idx_products_name ON products(name);
-- Drop the index first, then the table
DROP INDEX idx_products_name;
DROP TABLE products;
-- Safely clean up objects that may or may not exist
DROP VIEW IF EXISTS sales_summary;
DROP TABLE IF EXISTS orders;
DROP TABLE IF EXISTS customers;
-- Drop and recreate a table (reset)
DROP TABLE IF EXISTS temp_results;
CREATE TABLE temp_results (id INTEGER PRIMARY KEY, value TEXT);
| Feature | Status |
|---|---|
| DROP TABLE | Supported |
| DROP INDEX | Supported |
| DROP VIEW | Supported |
| DROP TRIGGER | Requires --experimental-triggers flag |
| IF EXISTS | Supported |
BEGIN [DEFERRED | IMMEDIATE | EXCLUSIVE] [TRANSACTION]
COMMIT [TRANSACTION]
END [TRANSACTION]
ROLLBACK [TRANSACTION]
A transaction is a sequence of SQL statements that are executed as a single atomic unit. Either all statements in the transaction complete successfully and their changes are made permanent, or none of them take effect. Transactions guarantee that the database moves from one consistent state to another, even in the presence of errors or unexpected termination.
Turso supports three commands for explicit transaction control: BEGIN starts a new transaction, COMMIT (or its alias END) makes all changes within the transaction permanent, and ROLLBACK discards all changes made since the transaction began.
Transactions in Turso do not nest. Issuing BEGIN while a transaction is already active will produce an error. For the same reason, issuing COMMIT or ROLLBACK outside of a transaction will also produce an error.
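Both error cases follow SQLite, so they can be sketched with Python's `sqlite3` module (a stand-in; Turso's error wording may differ):

```python
import sqlite3

# isolation_level=None disables the module's implicit transaction
# handling so BEGIN/COMMIT can be issued directly.
conn = sqlite3.connect(":memory:", isolation_level=None)

conn.execute("BEGIN")
try:
    conn.execute("BEGIN")  # already in a transaction
    nested_begin_failed = False
except sqlite3.OperationalError as exc:
    nested_begin_failed = True
    print("error:", exc)  # cannot start a transaction within a transaction
conn.execute("COMMIT")

try:
    conn.execute("COMMIT")  # no transaction is active now
    stray_commit_failed = False
except sqlite3.OperationalError as exc:
    stray_commit_failed = True
    print("error:", exc)
print(nested_begin_failed, stray_commit_failed)  # True True
```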
When no explicit transaction is active, Turso operates in autocommit mode. In this mode every individual SQL statement that reads from or writes to the database is automatically wrapped in its own implicit transaction. The implicit transaction is committed as soon as the statement finishes executing.
This means a single INSERT, UPDATE, or DELETE statement executed outside of an explicit transaction is atomic by itself – it either fully succeeds or has no effect. However, if you need multiple statements to succeed or fail together, you must wrap them in an explicit BEGIN … COMMIT block.
-- Autocommit mode: each statement is its own transaction
CREATE TABLE orders (id INTEGER PRIMARY KEY, product TEXT, qty INTEGER);
INSERT INTO orders VALUES (1, 'Widget', 10);
INSERT INTO orders VALUES (2, 'Gadget', 5);
-- Both rows are committed independently
The BEGIN statement accepts an optional keyword that controls when the transaction acquires its lock on the database.
| Type | Behavior |
|---|---|
| DEFERRED | The default. The transaction does not acquire any lock until the database is first accessed. A read statement starts a read transaction; a write statement starts a write transaction. |
| IMMEDIATE | A write lock is acquired immediately when BEGIN IMMEDIATE is executed, without waiting for the first write statement. This guarantees that no other connection can write to the database while this transaction is open. |
| EXCLUSIVE | Behaves the same as IMMEDIATE under WAL mode, which is the journaling mode used by Turso. A write lock is acquired immediately. |
When the transaction type is omitted, DEFERRED is assumed.
A deferred transaction does not acquire any database lock when BEGIN is executed. The lock is acquired lazily the first time the database is actually read from or written to within the transaction. If the first operation is a SELECT, a read lock is acquired. If the first operation is an INSERT, UPDATE, DELETE, or other write statement, a write lock is acquired.
-- DEFERRED is the default; these two are equivalent
BEGIN TRANSACTION;
-- ...
COMMIT;
BEGIN DEFERRED TRANSACTION;
-- ...
COMMIT;
An immediate transaction acquires a write lock as soon as BEGIN IMMEDIATE is executed. This is useful when you know the transaction will perform writes, because it avoids a potential conflict that could occur if a deferred transaction tries to upgrade from a read lock to a write lock after another connection has already started writing.
BEGIN IMMEDIATE TRANSACTION;
-- Write lock is held from this point
INSERT INTO orders (product, qty) VALUES ('Keyboard', 75);
COMMIT;
Under WAL mode – the mode Turso uses – EXCLUSIVE behaves identically to IMMEDIATE. Both acquire a write lock immediately. In other journaling modes (not used by Turso), EXCLUSIVE would additionally prevent other connections from reading the database, but this distinction does not apply here.
BEGIN EXCLUSIVE;
INSERT INTO orders (product, qty) VALUES ('Mouse', 200);
END;
COMMIT makes all changes performed within the current transaction permanent. END is an alias for COMMIT; they are interchangeable.
The optional TRANSACTION keyword after COMMIT or END is purely decorative and has no effect on behavior.
-- These are all equivalent
COMMIT;
COMMIT TRANSACTION;
END;
END TRANSACTION;
ROLLBACK discards all changes made within the current transaction. The database is restored to the state it was in before BEGIN was executed.
The optional TRANSACTION keyword after ROLLBACK is purely decorative and has no effect on behavior.
CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT, balance REAL);
INSERT INTO accounts VALUES (1, 'Alice', 1000.00);
INSERT INTO accounts VALUES (2, 'Bob', 500.00);
BEGIN;
UPDATE accounts SET balance = balance - 200.00 WHERE name = 'Alice';
UPDATE accounts SET balance = balance + 200.00 WHERE name = 'Bob';
-- At this point, Alice has 800 and Bob has 700 within the transaction
ROLLBACK;
SELECT * FROM accounts;
-- Alice still has 1000.00 and Bob still has 500.00
Wrapping related updates in a transaction ensures that a transfer between accounts either fully completes or has no effect.
CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT, balance REAL);
INSERT INTO accounts VALUES (1, 'Alice', 1000.00);
INSERT INTO accounts VALUES (2, 'Bob', 500.00);
BEGIN TRANSACTION;
UPDATE accounts SET balance = balance - 200.00 WHERE name = 'Alice';
UPDATE accounts SET balance = balance + 200.00 WHERE name = 'Bob';
COMMIT;
SELECT * FROM accounts;
-- 1|Alice|800.0
-- 2|Bob|700.0
Grouping multiple inserts into a single transaction is significantly faster than executing each insert in autocommit mode, because the database only needs to sync to disk once at COMMIT rather than after every individual statement.
CREATE TABLE inventory (id INTEGER PRIMARY KEY, product TEXT NOT NULL, qty INTEGER DEFAULT 0);
BEGIN;
INSERT INTO inventory (product, qty) VALUES ('Laptop', 50);
INSERT INTO inventory (product, qty) VALUES ('Mouse', 200);
INSERT INTO inventory (product, qty) VALUES ('Keyboard', 75);
COMMIT;
SELECT * FROM inventory;
-- 1|Laptop|50
-- 2|Mouse|200
-- 3|Keyboard|75
If something goes wrong during a sequence of operations, ROLLBACK ensures that partially applied changes do not corrupt the database.
CREATE TABLE orders (id INTEGER PRIMARY KEY, product TEXT, qty INTEGER);
BEGIN;
INSERT INTO orders VALUES (1, 'Widget', 10);
INSERT INTO orders VALUES (2, 'Gadget', 5);
-- Decide to discard these changes
ROLLBACK;
SELECT count(*) FROM orders;
-- 0
When you know a transaction will write to the database, starting with BEGIN IMMEDIATE avoids the overhead and potential failure of upgrading a read lock to a write lock mid-transaction.
CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, price REAL);
BEGIN IMMEDIATE;
INSERT INTO products VALUES (1, 'Widget', 9.99);
INSERT INTO products VALUES (2, 'Gadget', 24.95);
END;
SELECT * FROM products;
-- 1|Widget|9.99
-- 2|Gadget|24.95
END and COMMIT are interchangeable. Use whichever reads more naturally in your application.
CREATE TABLE events (id INTEGER PRIMARY KEY, description TEXT);
BEGIN DEFERRED TRANSACTION;
INSERT INTO events (description) VALUES ('user_login');
INSERT INTO events (description) VALUES ('page_view');
END TRANSACTION;
SELECT * FROM events;
-- 1|user_login
-- 2|page_view
Turso supports BEGIN, COMMIT / END, and ROLLBACK with the same syntax and semantics as SQLite. The TRANSACTION keyword is optional in all three commands, matching SQLite behavior.
SAVEPOINT and RELEASE SAVEPOINT are not supported. Nested transactions using savepoints are not available.
Turso operates exclusively in WAL (Write-Ahead Logging) mode. As a result, BEGIN EXCLUSIVE and BEGIN IMMEDIATE behave identically – both acquire a write lock immediately. The distinction between these two modes that exists in other SQLite journaling modes does not apply.
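The write-intent pattern above can be sketched in application code with Python's sqlite3 module as a SQLite-compatible stand-in. The connection is put in autocommit mode so that BEGIN IMMEDIATE, COMMIT, and ROLLBACK are issued explicitly rather than managed implicitly by the driver; the table and values are illustrative.

```python
import sqlite3

# isolation_level=None puts the connection in autocommit mode, so we
# control the transaction boundaries explicitly.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, price REAL)")

try:
    # Declare write intent up front: the write lock is acquired at BEGIN,
    # not lazily at the first write statement.
    conn.execute("BEGIN IMMEDIATE")
    conn.execute("INSERT INTO products VALUES (1, 'Widget', 9.99)")
    conn.execute("INSERT INTO products VALUES (2, 'Gadget', 24.95)")
    conn.execute("COMMIT")
except sqlite3.Error:
    conn.execute("ROLLBACK")  # discard partial changes on any failure
    raise

print(conn.execute("SELECT count(*) FROM products").fetchone()[0])  # 2
```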
EXPLAIN sql-statement
EXPLAIN QUERY PLAN sql-statement
The EXPLAIN statement is a diagnostic tool for understanding how Turso executes a SQL statement. It does not run the statement itself. Instead, it returns metadata about the execution strategy.
There are two forms. EXPLAIN returns the full sequence of virtual machine (VDBE) bytecode instructions that Turso would execute for the given statement. EXPLAIN QUERY PLAN returns a high-level summary of the query plan, showing which tables and indexes are accessed and in what order. Both forms are intended for interactive analysis, debugging, and performance tuning.
The EXPLAIN prefix can be applied to any SQL statement, including SELECT, INSERT, UPDATE, DELETE, CREATE TABLE, and others. The prefixed statement is compiled but never executed, so it has no side effects.
EXPLAIN returns one row per bytecode instruction. Each row has eight columns:
| Column | Type | Description |
|---|---|---|
addr | INTEGER | The instruction address (sequential, starting at 0). |
opcode | TEXT | The name of the VDBE opcode (e.g., Init, OpenRead, Column, ResultRow). |
p1 | INTEGER | First operand. Meaning varies by opcode. |
p2 | INTEGER | Second operand. Often a jump target or register number. |
p3 | INTEGER | Third operand. |
p4 | TEXT | Fourth operand. May contain a string constant, function name, key format, or table/index name. |
p5 | INTEGER | Fifth operand. Typically contains flags. |
comment | TEXT | A human-readable comment describing what the instruction does. |
The bytecode format is an internal implementation detail and may change between Turso releases. Do not write application logic that depends on specific opcodes or instruction sequences.
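The eight-column shape can be inspected programmatically. This sketch uses Python's sqlite3 module as a SQLite-compatible stand-in; opcode names, instruction counts, and whether the comment column is populated all vary by engine build and version, so only the column layout should be relied on here.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.execute("EXPLAIN SELECT 1")

# Eight columns per bytecode instruction row.
names = [d[0] for d in cur.description]
print(names)  # ['addr', 'opcode', 'p1', 'p2', 'p3', 'p4', 'p5', 'comment']

for addr, opcode, p1, p2, p3, p4, p5, comment in cur:
    # comment may be NULL depending on how the engine was built
    print(addr, opcode, comment or "")
```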
EXPLAIN QUERY PLAN returns one row per step in the query plan. Each row has four columns:
| Column | Type | Description |
|---|---|---|
id | INTEGER | A unique identifier for this node in the plan tree. |
parent | INTEGER | The id of the parent node (0 for root nodes). |
notused | INTEGER | Reserved. Always 0. |
detail | TEXT | A human-readable description of the operation at this step. |
The detail column contains the most useful information. Common values include:
| Detail prefix | Meaning |
|---|---|
SCAN table-name | A full table scan with no index. |
SEARCH table-name USING INDEX index-name | An indexed lookup. |
SEARCH table-name USING INTEGER PRIMARY KEY (rowid=?) | A direct rowid lookup via the primary key. |
USE TEMP B-TREE FOR ORDER BY | A temporary structure is used to sort results. |
SCAN CONSTANT ROW | A row is produced from constant values (no table access). |
When a query involves multiple tables (joins, subqueries), EXPLAIN QUERY PLAN returns multiple rows showing the access order and method for each table.
The output of EXPLAIN QUERY PLAN is the primary tool for diagnosing slow queries. The key things to look for:
A SCAN reads every row in the table, while a SEARCH uses an index to jump directly to matching rows. If a query is slow, look for unexpected SCAN operations on large tables and consider adding an index.
Operations like USE TEMP B-TREE indicate that Turso must build a temporary data structure (for sorting, grouping, or deduplication). This is normal for ORDER BY on non-indexed columns but can be a performance concern for large result sets.
EXPLAIN SELECT 1;
-- addr opcode p1 p2 p3 p4 p5 comment
-- ---- ----------------- ---- ---- ---- ------------- -- -------
-- 0 Init 0 3 0 0 Start at 3
-- 1 ResultRow 1 1 0 0 output=r[1]
-- 2 Halt 0 0 0 0
-- 3 Integer 1 1 0 0 r[1]=1
-- 4 Goto 0 1 0 0
The Init instruction jumps to address 3, where the integer 1 is loaded into register 1. Control then jumps to address 1, which outputs register 1 as a result row. Finally, Halt terminates execution.
CREATE TABLE orders (
id INTEGER PRIMARY KEY,
customer_id INTEGER,
product TEXT,
amount REAL
);
EXPLAIN QUERY PLAN SELECT * FROM orders;
-- QUERY PLAN
-- `--SCAN orders
With no WHERE clause and no index needed, Turso performs a full scan of the orders table.
CREATE INDEX idx_orders_customer ON orders(customer_id);
EXPLAIN QUERY PLAN SELECT id, product, amount
FROM orders WHERE customer_id = 42;
-- QUERY PLAN
-- `--SEARCH orders USING INDEX idx_orders_customer
The SEARCH line shows that Turso uses the idx_orders_customer index to find rows where customer_id = 42, avoiding a full table scan.
EXPLAIN QUERY PLAN SELECT * FROM orders WHERE id = 10;
-- QUERY PLAN
-- `--SEARCH orders USING INTEGER PRIMARY KEY (rowid=?)
When filtering by the INTEGER PRIMARY KEY, Turso performs a direct rowid lookup, which is the fastest possible access method.
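The SCAN-versus-SEARCH distinction can be observed mechanically by comparing plans before and after creating an index. This sketch uses Python's sqlite3 module as a SQLite-compatible stand-in; the exact detail strings vary slightly between engine versions.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, "
    "product TEXT, amount REAL)"
)

def plan(sql):
    # Each EXPLAIN QUERY PLAN row is (id, parent, notused, detail).
    return [row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql)]

query = "SELECT id, product, amount FROM orders WHERE customer_id = 42"

before = plan(query)  # no usable index yet: full table scan
conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
after = plan(query)   # the planner now picks the index

print(before)
print(after)
```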
CREATE TABLE customers (
id INTEGER PRIMARY KEY,
name TEXT,
email TEXT
);
EXPLAIN QUERY PLAN SELECT c.name, o.product, o.amount
FROM customers c
JOIN orders o ON c.id = o.customer_id
WHERE c.name = 'Alice';
-- QUERY PLAN
-- |--SCAN customers AS c
-- `--SEARCH o USING INDEX idx_orders_customer
Turso scans the customers table (the outer loop), and for each matching customer, uses the index on orders(customer_id) to find their orders (the inner loop).
EXPLAIN QUERY PLAN SELECT product, SUM(amount)
FROM orders
GROUP BY product
ORDER BY SUM(amount) DESC;
-- QUERY PLAN
-- |--SCAN orders
-- `--USE TEMP B-TREE FOR ORDER BY
The plan shows a full table scan to read and group the data, followed by a temporary B-tree to sort the grouped results by the aggregate value.
EXPLAIN INSERT INTO orders VALUES (1, 42, 'Widget', 9.99);
-- addr opcode p1 p2 p3 p4 p5 comment
-- ---- ----------------- ---- ---- ---- ------------- -- -------
-- 0 Init 0 20 0 0 Start at 20
-- 1 OpenWrite 0 2 0 0 root=2; iDb=0
-- 2 Integer 1 2 0 0 r[2]=1
-- 3 SoftNull 3 0 0 0
-- 4 Integer 42 4 0 0 r[4]=42
-- 5 String8 0 5 0 Widget 0 r[5]='Widget'
-- 6 Real 0 6 0 9.99 0 r[6]=9.99
-- 7 NotNull 2 9 0 0 r[2]!=NULL -> goto 9
-- 8 Goto 0 11 0 0
-- 9 MustBeInt 2 0 0 0
-- 10 Goto 0 12 0 0
-- 11 NewRowid 0 2 0 0 r[2]=rowid
-- 12 Affinity 3 4 0 0 r[3..7] = D, D, B, E
-- 13 NotExists 0 15 2 0
-- 14 Halt 1555 0 0 orders.id 0
-- 15 MakeRecord 3 4 7 0 r[7]=mkrec(r[3..6])
-- 16 Insert 0 7 2 orders 0 intkey=r[2] data=r[7]
-- 17 Goto 0 18 0 0
-- 18 Goto 0 19 0 0
-- 19 Halt 0 0 0 0
-- 20 Transaction 0 2 1 0 iDb=0 tx_mode=Write
-- 21 Goto 0 1 0 0
The bytecode shows the full instruction sequence: opening the table for writing, loading the values into registers, checking for primary key conflicts, assembling the record, and inserting it.
Operators perform arithmetic, comparison, logical, bitwise, and string operations on values in SQL expressions. Turso supports the full set of SQLite operators, including the standard SQL IS DISTINCT FROM and IS NOT DISTINCT FROM forms.
When an expression contains multiple operators, precedence determines the order of evaluation. Operators with higher precedence bind more tightly. Operators at the same precedence level are left-associative (evaluated left to right). Parentheses override precedence.
| Precedence | Operators | Category |
|---|---|---|
| 1 (highest) | ~ + - | Unary bitwise NOT, unary plus, unary minus |
| 2 | `||` | String concatenation |
| 3 | * / % | Multiplication, division, modulo |
| 4 | + - | Addition, subtraction |
| 5 | `&` `|` `<<` `>>` | Bitwise AND, bitwise OR, left shift, right shift |
| 6 | < > <= >= | Relational comparison |
| 7 | = == <> IS IS NOT IS DISTINCT FROM IS NOT DISTINCT FROM | Equality and identity |
| 8 | NOT | Logical NOT (unary) |
| 9 | AND | Logical AND |
| 10 (lowest) | OR | Logical OR |
-- Multiplication binds tighter than addition
SELECT 2 + 3 * 4; -- 14 (not 20)
SELECT (2 + 3) * 4; -- 20 (parentheses override)
-- AND binds tighter than OR
SELECT 1 OR 0 AND 0; -- 1 (equivalent to: 1 OR (0 AND 0))
-- NOT binds tighter than AND
SELECT NOT 1 AND 0; -- 0 (equivalent to: (NOT 1) AND 0)
-- Left associativity
SELECT 10 - 2 - 3; -- 5 (equivalent to: (10 - 2) - 3)
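The precedence examples above can be verified mechanically. This sketch uses Python's sqlite3 module as a SQLite-compatible stand-in; the val() helper is defined here only for the demonstration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

def val(expr):
    # Evaluate a scalar SQL expression and return its value.
    return conn.execute("SELECT " + expr).fetchone()[0]

print(val("2 + 3 * 4"))     # 14: * binds tighter than +
print(val("(2 + 3) * 4"))   # 20: parentheses override precedence
print(val("1 OR 0 AND 0"))  # 1: AND binds tighter than OR
print(val("NOT 1 AND 0"))   # 0: NOT binds tighter than AND
print(val("10 - 2 - 3"))    # 5: same-precedence operators associate left
```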
Unary operators take a single operand. They have the highest precedence of all operators.
| Operator | Description |
|---|---|
-expr | Negates the numeric value of expr. |
+expr | No-op. Returns the value of expr unchanged. |
~expr | Bitwise NOT. Inverts every bit of the integer value of expr. |
NOT expr | Logical NOT. Returns 1 if expr is 0, returns 0 if expr is non-zero, returns NULL if expr is NULL. |
SELECT -5; -- -5
SELECT +5; -- 5
SELECT ~5; -- -6 (inverts all bits of integer 5)
SELECT NOT 1; -- 0
SELECT NOT 0; -- 1
SELECT NOT NULL; -- NULL
Arithmetic operators perform numeric calculations. When both operands are integers, the result is an integer. When either operand is a real (floating-point) value, the result is a real.
| Operator | Description |
|---|---|
expr1 + expr2 | Addition. |
expr1 - expr2 | Subtraction. |
expr1 * expr2 | Multiplication. |
expr1 / expr2 | Division. Integer division truncates toward zero. |
expr1 % expr2 | Modulo. Returns the remainder after integer division. Both operands are cast to integers. |
SELECT 5 + 3; -- 8
SELECT 10 - 4; -- 6
SELECT 6 * 7; -- 42
SELECT 15 / 4; -- 3 (integer division truncates)
SELECT 7.0 / 2; -- 3.5 (real division when either operand is real)
SELECT 2.5 + 1.5; -- 4.0
SELECT 17 % 5; -- 2
SELECT -7 % 3; -- -1
Any arithmetic operation involving NULL produces NULL:
SELECT 5 + NULL; -- NULL
CREATE TABLE products (name TEXT, price REAL, quantity INTEGER);
INSERT INTO products VALUES ('Widget', 9.99, 100),
('Gadget', 24.50, 50),
('Gizmo', 4.75, 200);
SELECT name, price * quantity AS total_value FROM products;
-- Widget|999.0
-- Gadget|1225.0
-- Gizmo|950.0
The || operator joins two values into a single text string. Non-text operands are converted to text before concatenation. If either operand is NULL, the result is NULL.
SELECT 'Hello' || ' ' || 'World'; -- Hello World
SELECT 'Order #' || 42; -- Order #42
SELECT 'text' || NULL; -- NULL
CREATE TABLE users (first_name TEXT, last_name TEXT);
INSERT INTO users VALUES ('Alice', 'Smith'), ('Bob', 'Jones');
SELECT first_name || ' ' || last_name AS full_name FROM users;
-- Alice Smith
-- Bob Jones
Comparison operators compare two values and return an integer: 1 for true, 0 for false. When either operand is NULL, standard comparison operators return NULL (with exceptions noted below).
| Operator | Description |
|---|---|
expr1 = expr2 | True if operands are equal. |
expr1 == expr2 | Synonym for =. |
expr1 <> expr2 | True if operands are not equal. |
SELECT 1 = 1; -- 1
SELECT 1 == 1; -- 1
SELECT 1 <> 2; -- 1
| Operator | Description |
|---|---|
expr1 < expr2 | True if expr1 is less than expr2. |
expr1 > expr2 | True if expr1 is greater than expr2. |
expr1 <= expr2 | True if expr1 is less than or equal to expr2. |
expr1 >= expr2 | True if expr1 is greater than or equal to expr2. |
SELECT 3 < 5; -- 1
SELECT 5 > 3; -- 1
SELECT 3 <= 3; -- 1
SELECT 3 >= 3; -- 1
Standard comparison operators return NULL when either operand is NULL, because NULL represents an unknown value:
SELECT NULL = NULL; -- NULL (not 1!)
SELECT NULL <> NULL; -- NULL (not 1!)
This is why WHERE column = NULL never matches any rows. Use IS NULL instead.
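The practical consequence is easy to demonstrate against a table containing a NULL. This sketch uses Python's sqlite3 module as a SQLite-compatible stand-in; the table name is illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (None,), (3,)])

# '= NULL' compares against an unknown value: no row ever qualifies.
print(conn.execute("SELECT count(*) FROM t WHERE x = NULL").fetchone()[0])   # 0

# 'IS NULL' tests for the NULL storage class itself.
print(conn.execute("SELECT count(*) FROM t WHERE x IS NULL").fetchone()[0])  # 1
```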
The IS and IS NOT operators work like = and <> but handle NULL deterministically: they never return NULL.
| Left | Right | IS | IS NOT |
|---|---|---|---|
| NULL | NULL | 1 | 0 |
| NULL | non-NULL | 0 | 1 |
| non-NULL | NULL | 0 | 1 |
| value | value | same as = | same as <> |
SELECT NULL IS NULL; -- 1
SELECT 5 IS NOT NULL; -- 1
SELECT 5 IS NULL; -- 0
SELECT NULL IS NOT NULL; -- 0
These are standard SQL aliases for IS NOT and IS, respectively. They are useful for writing portable SQL that avoids the compact SQLite-specific IS/IS NOT syntax.
| Expression | Equivalent To |
|---|---|
a IS DISTINCT FROM b | a IS NOT b |
a IS NOT DISTINCT FROM b | a IS b |
SELECT NULL IS NOT DISTINCT FROM NULL; -- 1 (equivalent to NULL IS NULL)
SELECT 5 IS DISTINCT FROM NULL; -- 1 (equivalent to 5 IS NOT NULL)
SELECT 5 IS DISTINCT FROM 5; -- 0 (equivalent to 5 IS NOT 5)
SELECT NULL IS NOT DISTINCT FROM 5; -- 0 (equivalent to NULL IS 5)
Logical operators evaluate boolean expressions. Turso uses three-valued logic: true (1), false (0), and unknown (NULL).
Returns 1 if both operands are true, 0 if either operand is false, and NULL otherwise.
| Left | Right | Result |
|---|---|---|
| 1 | 1 | 1 |
| 1 | 0 | 0 |
| 0 | 0 | 0 |
| NULL | 1 | NULL |
| NULL | 0 | 0 |
| NULL | NULL | NULL |
The key behavior: NULL AND 0 is 0 (not NULL), because regardless of the unknown value, the result must be false when the other operand is false.
SELECT 1 AND 1; -- 1
SELECT 1 AND 0; -- 0
SELECT NULL AND 0; -- 0
SELECT NULL AND 1; -- NULL
Returns 1 if either operand is true, 0 if both operands are false, and NULL otherwise.
| Left | Right | Result |
|---|---|---|
| 1 | 0 | 1 |
| 0 | 0 | 0 |
| 1 | 1 | 1 |
| NULL | 1 | 1 |
| NULL | 0 | NULL |
| NULL | NULL | NULL |
The key behavior: NULL OR 1 is 1 (not NULL), because regardless of the unknown value, the result must be true when the other operand is true.
SELECT 1 OR 0; -- 1
SELECT 0 OR 0; -- 0
SELECT NULL OR 1; -- 1
SELECT NULL OR 0; -- NULL
Returns 0 if the operand is true, 1 if the operand is false, and NULL if the operand is NULL.
SELECT NOT 1; -- 0
SELECT NOT 0; -- 1
SELECT NOT NULL; -- NULL
CREATE TABLE orders (id INTEGER, amount REAL, status TEXT);
INSERT INTO orders VALUES (1, 150.00, 'shipped'),
(2, 75.50, 'pending'),
(3, 200.00, 'shipped'),
(4, 30.00, 'cancelled');
SELECT * FROM orders WHERE amount > 100 AND status = 'shipped';
-- 1|150.0|shipped
-- 3|200.0|shipped
Bitwise operators work on the integer representation of their operands. Non-integer operands are cast to integers before the operation.
| Operator | Description |
|---|---|
expr1 & expr2 | Bitwise AND. Sets each bit to 1 only if both corresponding bits are 1. |
`expr1 | expr2` | Bitwise OR. Sets each bit to 1 if either corresponding bit is 1. |
~expr | Bitwise NOT (unary). Inverts every bit. |
expr1 << expr2 | Left shift. Shifts bits of expr1 left by expr2 positions, filling with zeros. |
expr1 >> expr2 | Right shift. Shifts bits of expr1 right by expr2 positions. |
SELECT 5 & 3; -- 1 (0101 & 0011 = 0001)
SELECT 5 | 3; -- 7 (0101 | 0011 = 0111)
SELECT ~5; -- -6 (inverts all 64 bits)
SELECT 1 << 4; -- 16 (shift 1 left by 4 positions)
SELECT 16 >> 2; -- 4 (shift 16 right by 2 positions)
SELECT 0xFF & 0x0F; -- 15 (mask lower nibble)
SELECT 0xF0 | 0x0F; -- 255 (combine nibbles)
Most operators propagate NULL: if any operand is NULL, the result is NULL. The exceptions are:
| Operator | NULL Behavior |
|---|---|
IS / IS NOT | Never returns NULL. Treats NULL as a comparable value. |
IS DISTINCT FROM / IS NOT DISTINCT FROM | Never returns NULL. Same as IS NOT / IS. |
AND | Returns 0 if the other operand is 0, even when one operand is NULL. |
OR | Returns 1 if the other operand is 1, even when one operand is NULL. |
The !< (not less than) and !> (not greater than) operators recognized by some databases are not supported. Use >= and <= instead.
A literal (also called a constant) is a fixed value written directly in a SQL statement. Turso supports six kinds of literal values: integers, reals, strings, blobs, NULL, and booleans.
An integer literal is a sequence of decimal digits with no decimal point and no exponent. Integer values are stored as 64-bit signed integers, supporting the range -9223372036854775808 to 9223372036854775807.
SELECT 42; -- 42
SELECT -100; -- -100
SELECT 0; -- 0
If a numeric literal without a decimal point or exponent exceeds the 64-bit signed integer range, it is automatically treated as a real (floating-point) value:
SELECT typeof(9223372036854775807); -- integer (fits in 64-bit)
SELECT typeof(9223372036854775808); -- real (exceeds 64-bit range)
Integer literals may also be written in hexadecimal using the 0x or 0X prefix followed by hexadecimal digits (0-9, a-f, A-F). Hexadecimal literals are interpreted as 64-bit two’s-complement integers.
SELECT 0x1F; -- 31
SELECT 0xFF; -- 255
SELECT 0x0; -- 0
SELECT 0X1F; -- 31 (uppercase prefix also works)
Hexadecimal notation is recognized only in SQL literal syntax. Runtime string-to-integer conversions (such as CAST('0xFF' AS INTEGER)) do not interpret hex prefixes.
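The difference between hex literal syntax and runtime string conversion can be seen side by side. This sketch uses Python's sqlite3 module as a SQLite-compatible stand-in.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Hex notation is recognized in SQL literal syntax...
print(conn.execute("SELECT 0xFF").fetchone()[0])                     # 255

# ...but runtime string-to-integer conversion stops at the first
# non-digit, so '0xFF' converts to 0, not 255.
print(conn.execute("SELECT CAST('0xFF' AS INTEGER)").fetchone()[0])  # 0
```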
A numeric literal is treated as a real (floating-point) value if it contains a decimal point, an exponent clause, or both. Real values are stored as 8-byte IEEE 754 floating-point numbers.
SELECT 3.14; -- 3.14
SELECT .5; -- 0.5
SELECT 100.0; -- 100.0
An exponent clause consists of the letter E or e, an optional sign (+ or -), and one or more digits:
SELECT 2.5e3; -- 2500.0
SELECT 1.5E-2; -- 0.015
SELECT 1e10; -- 10000000000.0
For readability, a single underscore (_) may be placed between any two digits in a numeric literal. Underscores are ignored during parsing and do not affect the value. This works for both integer and real literals.
SELECT 1_000_000; -- 1000000
SELECT 1_000.000_001; -- 1000.000001
A string literal is a sequence of characters enclosed in single quotes ('). The value has TEXT storage class.
SELECT 'Hello, world!'; -- Hello, world!
SELECT ''; -- (empty string)
To include a literal single-quote character within a string, write two single quotes in a row (''). This is the standard SQL escaping mechanism. C-style backslash escapes (\') are not supported.
SELECT 'It''s a test'; -- It's a test
SELECT 'She said ''hi'''; -- She said 'hi'
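In application code, the usual way to sidestep manual escaping altogether is parameter binding, where the value travels out-of-band and is never spliced into the SQL text. This sketch uses Python's sqlite3 module as a SQLite-compatible stand-in; other client libraries offer the same facility under their own placeholder syntax.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Standard SQL escaping: two single quotes inside the literal.
print(conn.execute("SELECT 'It''s a test'").fetchone()[0])       # It's a test

# Parameter binding: no escaping needed, the value is bound directly.
print(conn.execute("SELECT ?", ("It's a test",)).fetchone()[0])  # It's a test
```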
A blob literal is a string of hexadecimal digits preceded by X or x and enclosed in single quotes. Each pair of hex digits represents one byte. The number of hex digits must be even. The value has BLOB storage class.
SELECT X'48656C6C6F'; -- blob containing bytes for "Hello"
SELECT typeof(X'48656C6C6F'); -- blob
SELECT length(X'48656C6C6F'); -- 5
SELECT x'48656C6C6F'; -- lowercase x prefix also works
SELECT X'FF'; -- single byte: 0xFF
A blob literal with no hex digits (X'') produces a zero-length blob.
Invalid blob literals are rejected at parse time. The hex digits must be valid (0-9, a-f, A-F) and the total count must be even.
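From a client language, a blob literal comes back as raw bytes. This sketch uses Python's sqlite3 module as a SQLite-compatible stand-in, where BLOB values map to Python bytes objects.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Each hex-digit pair is one byte; 10 digits produce a 5-byte blob.
val = conn.execute("SELECT X'48656C6C6F'").fetchone()[0]
print(val)                                                         # b'Hello'

print(conn.execute("SELECT typeof(X'48656C6C6F')").fetchone()[0])  # blob
print(conn.execute("SELECT length(X'')").fetchone()[0])            # 0
```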
The keyword NULL represents a missing or unknown value. NULL is its own storage class and is distinct from any other value, including zero, an empty string, or a zero-length blob.
SELECT NULL; -- (empty result)
SELECT typeof(NULL); -- null
NULL propagates through most operations. An expression involving NULL generally produces NULL:
SELECT NULL + 5; -- NULL
Use IS NULL or IS NOT NULL to test for null values. The = operator does not match NULL because NULL is not equal to anything, including itself.
Turso recognizes TRUE and FALSE as boolean literals. They are aliases for the integer values 1 and 0, respectively.
SELECT TRUE; -- 1
SELECT FALSE; -- 0
SELECT typeof(TRUE); -- integer
SELECT typeof(FALSE); -- integer
Because TRUE and FALSE are integers, they participate in arithmetic like any other integer:
SELECT TRUE + TRUE; -- 2
When TRUE or FALSE appear on the right-hand side of the IS operator, the expression performs a boolean evaluation of the left operand. Any non-zero, non-NULL value IS TRUE, and zero IS FALSE. NULL is neither true nor false.
SELECT 5 IS TRUE; -- 1 (non-zero is true)
SELECT 0 IS FALSE; -- 1 (zero is false)
SELECT NULL IS TRUE; -- 0 (NULL is not true)
SELECT NULL IS FALSE; -- 0 (NULL is not false)
Use the typeof() function to inspect the storage class of any literal:
SELECT typeof(42); -- integer
SELECT typeof(3.14); -- real
SELECT typeof('hello'); -- text
SELECT typeof(X'FF'); -- blob
SELECT typeof(NULL); -- null
SELECT typeof(TRUE); -- integer
| Literal Kind | Syntax | Storage Class |
|---|---|---|
| Integer | 42, -7, 0xFF | INTEGER |
| Real | 3.14, 2.5e3, .5 | REAL |
| String | 'text' | TEXT |
| Blob | X'hex' | BLOB |
| NULL | NULL | NULL |
| Boolean | TRUE, FALSE | INTEGER |
Turso provides several ways to express conditional logic within SQL queries: the CASE expression for general-purpose branching, and the built-in functions IIF, COALESCE, NULLIF, and IFNULL for common patterns.
There are two forms of the CASE expression.
Searched CASE (without a base expression):
CASE
WHEN condition THEN result
[WHEN condition THEN result ...]
[ELSE default]
END
Simple CASE (with a base expression):
CASE expr
WHEN value THEN result
[WHEN value THEN result ...]
[ELSE default]
END
The CASE expression evaluates a series of conditions and returns the result associated with the first condition that is true.
In the searched form, each WHEN clause contains an arbitrary boolean expression. Turso evaluates the WHEN expressions from left to right and returns the THEN result corresponding to the first expression that is true. A WHEN expression is considered true if its result is non-zero and non-NULL.
In the simple form, Turso evaluates the base expression once, then compares it against each WHEN value from left to right using the = operator. The result of the first matching WHEN value is returned. Because NULL = NULL evaluates to NULL (not true), a NULL base expression will never match any WHEN value, and the ELSE branch (or NULL) is returned.
Both forms use short-circuit evaluation: once a matching WHEN is found, the remaining WHEN clauses are not evaluated. If no WHEN clause matches and there is no ELSE, the result is NULL.
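Short-circuiting can be observed with a side-effecting user-defined function. This sketch uses Python's sqlite3 module as a SQLite-compatible stand-in; the bump() helper is hypothetical, defined here only to count how many WHEN clauses actually run.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

calls = 0
def bump():
    # Side-effecting helper: counts each evaluation, always returns true.
    global calls
    calls += 1
    return 1

conn.create_function("bump", 0, bump)

# The first WHEN is true, so the second WHEN is never evaluated.
result = conn.execute(
    "SELECT CASE WHEN bump() THEN 'first' WHEN bump() THEN 'second' END"
).fetchone()[0]
print(result, calls)  # first 1
```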
-- Searched CASE: classify a number
SELECT CASE
WHEN 1 > 0 THEN 'positive'
ELSE 'non-positive'
END;
-- positive
-- Simple CASE: map a value to a label
SELECT CASE 2
WHEN 1 THEN 'one'
WHEN 2 THEN 'two'
WHEN 3 THEN 'three'
ELSE 'other'
END;
-- two
-- Multiple WHEN clauses; first match wins
SELECT CASE
WHEN 1 = 2 THEN 'first'
WHEN 2 = 3 THEN 'second'
WHEN 3 = 3 THEN 'third'
ELSE 'none'
END;
-- third
-- NULL and 0 are not true in WHEN conditions
SELECT CASE WHEN NULL THEN 'yes' ELSE 'no' END;
-- no
SELECT CASE WHEN 0 THEN 'matched' END;
-- NULL (no ELSE clause, so NULL is returned)
-- NULL base expression never matches
SELECT CASE NULL
WHEN 1 THEN 'one'
WHEN NULL THEN 'null'
ELSE 'else'
END;
-- else
-- Categorize product availability
CREATE TABLE products (name TEXT, price REAL, stock INTEGER);
INSERT INTO products VALUES ('Widget', 25.99, 100);
INSERT INTO products VALUES ('Gadget', 0, 50);
INSERT INTO products VALUES ('Doohickey', 15.50, 0);
SELECT name,
CASE
WHEN stock = 0 THEN 'out of stock'
WHEN stock < 20 THEN 'low stock'
ELSE 'in stock'
END AS availability
FROM products;
-- Widget | in stock
-- Gadget | in stock
-- Doohickey | out of stock
iif(condition, true_value, false_value) -> value
Returns true_value if condition is true (non-zero and non-NULL), otherwise returns false_value. This is a shorthand for CASE WHEN condition THEN true_value ELSE false_value END.
SELECT iif(1, 'true', 'false');
-- true
SELECT iif(10 > 5, 'big', 'small');
-- big
-- Label orders based on status
CREATE TABLE orders (id INTEGER, total REAL, status TEXT);
INSERT INTO orders VALUES (1, 150.00, 'shipped');
INSERT INTO orders VALUES (2, 0, 'pending');
INSERT INTO orders VALUES (3, 75.50, 'delivered');
SELECT id, iif(status = 'shipped', 'in transit', 'other') AS label
FROM orders;
-- 1 | in transit
-- 2 | other
-- 3 | other
coalesce(x, y, …) -> value
Returns the first argument that is not NULL. If all arguments are NULL, returns NULL. Requires at least two arguments.
COALESCE uses short-circuit evaluation: arguments to the right of the first non-NULL value are not evaluated.
SELECT coalesce(NULL, NULL, 'third');
-- third
SELECT coalesce(NULL, NULL);
-- NULL
-- Pick the best available contact method
CREATE TABLE contacts (name TEXT, phone TEXT, email TEXT);
INSERT INTO contacts VALUES ('Alice', NULL, '[email protected]');
INSERT INTO contacts VALUES ('Bob', '555-1234', NULL);
INSERT INTO contacts VALUES ('Charlie', NULL, NULL);
SELECT name, coalesce(phone, email, 'no contact info') AS contact
FROM contacts;
-- Alice | [email protected]
-- Bob | 555-1234
-- Charlie | no contact info
nullif(x, y) -> value
Returns x if x and y are different. Returns NULL if x and y are equal. The comparison uses the same rules as the = operator.
A common use of NULLIF is to convert a sentinel value (such as zero or an empty string) into NULL, which can then be handled by COALESCE, IFNULL, or aggregate functions that skip NULL values.
SELECT nullif(5, 5);
-- NULL
SELECT nullif(5, 8);
-- 5
-- Prevent division by zero (dividing by NULL yields NULL instead of an error)
SELECT 100.0 / nullif(0, 0);
-- NULL
SELECT 100.0 / nullif(5, 0);
-- 20.0
-- Convert zero prices to NULL
CREATE TABLE products (name TEXT, price REAL, stock INTEGER);
INSERT INTO products VALUES ('Widget', 25.99, 100);
INSERT INTO products VALUES ('Gadget', 0, 50);
INSERT INTO products VALUES ('Doohickey', 15.50, 0);
SELECT name, nullif(price, 0) AS nonzero_price
FROM products;
-- Widget | 25.99
-- Gadget | NULL
-- Doohickey | 15.5
ifnull(x, y) -> value
Returns x if x is not NULL, otherwise returns y. This is equivalent to coalesce(x, y) with exactly two arguments.
SELECT ifnull(NULL, 'fallback');
-- fallback
SELECT ifnull('present', 'fallback');
-- present
-- Provide a default for missing phone numbers
CREATE TABLE contacts (name TEXT, phone TEXT, email TEXT);
INSERT INTO contacts VALUES ('Alice', NULL, '[email protected]');
INSERT INTO contacts VALUES ('Bob', '555-1234', NULL);
INSERT INTO contacts VALUES ('Charlie', NULL, NULL);
SELECT name, ifnull(phone, 'N/A') AS phone
FROM contacts;
-- Alice | N/A
-- Bob | 555-1234
-- Charlie | N/A
| Need | Use | Example |
|---|---|---|
| Multi-way branching | CASE WHEN ... THEN ... END | Classify rows into categories |
| Match a value against a list | CASE expr WHEN ... THEN ... END | Map status codes to labels |
| Simple if/else in one line | iif(cond, a, b) | Toggle between two values |
| First non-NULL from a list | coalesce(a, b, c) | Pick best available contact |
| Convert a value to NULL | nullif(x, sentinel) | Turn 0 into NULL before division |
| Default for a single NULL | ifnull(x, default) | Replace NULL with a placeholder |
Turso provides three pattern-matching operators for comparing strings against patterns: LIKE, GLOB, and REGEXP. Each uses different wildcard conventions and case-sensitivity rules. All three can be negated with the NOT keyword.
expr [NOT] LIKE pattern [ESCAPE escape-char]
expr [NOT] GLOB pattern
expr [NOT] REGEXP pattern
Each operator returns 1 (true) if the string matches the pattern, 0 (false) if it does not, or NULL if either operand is NULL.
The LIKE operator performs pattern matching using two wildcard characters:
| Wildcard | Meaning |
|---|---|
% | Matches any sequence of zero or more characters. |
_ | Matches exactly one character. |
Any other character in the pattern matches itself. LIKE is case-insensitive for ASCII characters by default – 'a' LIKE 'A' evaluates to true. For Unicode characters outside the ASCII range, LIKE is case-sensitive: 'ä' LIKE 'Ä' evaluates to false.
-- % matches any sequence of characters
SELECT 'sweater' LIKE 'sweat%'; -- 1
SELECT 'sweatshirt' LIKE 'sweat%'; -- 1
SELECT 'hat' LIKE 'sweat%'; -- 0
-- _ matches exactly one character
SELECT 'sweater' LIKE 'sweat_r'; -- 1
SELECT 'sweatshirt' LIKE 'sweat_r'; -- 0
-- Case-insensitive for ASCII
SELECT 'sweater' LIKE 'SWEAT%'; -- 1
SELECT 'sweater' LIKE 'SwEaT_R'; -- 1
The % and _ wildcards can be combined to build expressive patterns. Use % at the beginning and end of a pattern to search for a substring anywhere in a string:
SELECT 'hello world' LIKE '%world%'; -- 1
SELECT 'hello world' LIKE '%xyz%'; -- 0
To match a literal % or _ character in a pattern, use the ESCAPE clause. The escape character causes the next %, _, or the escape character itself to be treated as a literal rather than a wildcard.
-- Match a literal % character using \ as the escape character
SELECT '10%' LIKE '10\%' ESCAPE '\'; -- 1
SELECT '10x' LIKE '10\%' ESCAPE '\'; -- 0
The escape character must be a single character. If the ESCAPE value is NULL, the entire LIKE expression evaluates to NULL.
-- Any single character can serve as the escape character
SELECT 'a%bc' LIKE 'a5%%' ESCAPE '5'; -- 1
Prefixing with NOT inverts the result:
SELECT 'sweater' NOT LIKE 'sweat%'; -- 0
SELECT 'hat' NOT LIKE 'sweat%'; -- 1
The LIKE operator can also be invoked as a function. The infix expression X LIKE Y is equivalent to like(Y, X), and X LIKE Y ESCAPE Z is equivalent to like(Y, X, Z). Note that the pattern is the first argument in function form.
SELECT like('sweat%', 'sweater'); -- 1
SELECT like('abcX%', 'abc%', 'X'); -- 1
The GLOB operator is similar to LIKE but uses Unix file-globbing syntax for wildcards and is case-sensitive.
| Wildcard | Meaning |
|---|---|
* | Matches any sequence of zero or more characters (like % in LIKE). |
? | Matches exactly one character (like _ in LIKE). |
[chars] | Matches one character from the set or range inside the brackets. |
[^chars] | Matches one character not in the set or range inside the brackets. |
Because GLOB is case-sensitive, 'hello' GLOB 'H*' evaluates to false.
-- * matches any sequence of characters
SELECT 'hello' GLOB 'h*'; -- 1
SELECT 'hello' GLOB 'H*'; -- 0 (case-sensitive)
-- ? matches exactly one character
SELECT 'hello' GLOB '?ello'; -- 1
SELECT 'hello' GLOB '??llo'; -- 1
Square brackets define a set of characters to match against a single position. A range can be specified with a hyphen. Use ^ after the opening bracket to negate the set.
-- [cde] matches one character that is c, d, or e
SELECT 'abcdefg' GLOB 'abc[cde]efg'; -- 1 (d matches [cde])
SELECT 'abcbefg' GLOB 'abc[cde]efg'; -- 0 (b not in [cde])
-- [c-e] matches one character in the range c through e
SELECT 'abcdefg' GLOB 'abc[c-e]efg'; -- 1
SELECT 'abcfefg' GLOB 'abc[c-e]efg'; -- 0
-- [^cde] matches one character NOT in the set
SELECT 'abcbefg' GLOB 'abc[^cde]efg'; -- 1
SELECT 'abccefg' GLOB 'abc[^cde]efg'; -- 0
A literal - can be included in a character class by placing it first or last: [-c] matches either - or c.
SELECT '-' GLOB '[-c]'; -- 1
SELECT 'c' GLOB '[-c]'; -- 1
SELECT 'x' GLOB '[-c]'; -- 0
Prefixing with NOT inverts the result:
SELECT 'hello' NOT GLOB 'h*'; -- 0
SELECT 'hello' NOT GLOB 'H*'; -- 1
The infix expression X GLOB Y is equivalent to glob(Y, X). The pattern is the first argument.
SELECT glob('h*', 'hello'); -- 1
SELECT glob('[a-k]*', 'hello'); -- 1
The REGEXP operator tests whether a string matches a regular expression pattern. Turso provides a built-in regexp() function, so REGEXP works without loading an extension.
The infix expression X REGEXP Y is equivalent to regexp(Y, X). The pattern is the first argument in function form.
SELECT 'hello' REGEXP 'h.*o'; -- 1
SELECT 'hello' REGEXP '^world$'; -- 0
REGEXP supports standard regular expression syntax, including character classes, anchors, quantifiers, and escape sequences:
-- Anchors: ^ (start of string) and $ (end of string)
SELECT 'hello' REGEXP '^hello$'; -- 1
SELECT 'hello' REGEXP '^ello'; -- 0
-- Character classes and quantifiers
SELECT '[email protected]' REGEXP '^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$'; -- 1
-- \d for digits, \b for word boundaries
SELECT '2024-01-15' REGEXP '^\d{4}-\d{2}-\d{2}$'; -- 1
SELECT 'hello world' REGEXP '\bworld\b'; -- 1
Prefixing with NOT inverts the result:
SELECT 'hello' NOT REGEXP 'h.*o'; -- 0
SELECT 'hello' NOT REGEXP '^world$'; -- 1
REGEXP can be used in WHERE clauses, CASE expressions, and subqueries just like any other boolean expression:
CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, sku TEXT);
INSERT INTO products VALUES (1, 'Widget A', 'WGT-001'),
(2, 'Gadget B', 'GDT-002'),
(3, 'Widget C', 'WGT-003'),
(4, 'Doohickey', 'DHK-004');
-- Filter rows with REGEXP in WHERE
SELECT name FROM products WHERE sku REGEXP '^WGT';
-- Widget A
-- Widget C
-- Classify rows with REGEXP in CASE
SELECT name,
CASE WHEN name REGEXP '^Widget' THEN 'widget'
ELSE 'other'
END AS category
FROM products;
-- Widget A|widget
-- Gadget B|other
-- Widget C|widget
-- Doohickey|other
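Stock SQLite ships no regexp() function, so the REGEXP operator fails there unless one is registered; Turso provides it built-in. If you want to reproduce these examples against Python's sqlite3 module, a minimal sketch is the following (it uses Python's re module, whose pattern syntax may differ in corner cases from Turso's built-in engine):

```python
import re
import sqlite3

con = sqlite3.connect(":memory:")

# X REGEXP Y is rewritten by the engine as regexp(Y, X): pattern first.
def regexp(pattern, value):
    if pattern is None or value is None:
        return None  # propagate NULL like the built-in operator
    return re.search(pattern, value) is not None

con.create_function("regexp", 2, regexp)

print(con.execute("SELECT 'hello' REGEXP 'h.*o'").fetchone()[0])    # 1
print(con.execute("SELECT 'hello' REGEXP '^world$'").fetchone()[0]) # 0
```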
For all three operators, if either operand is NULL, the result is NULL:
SELECT NULL LIKE 'hello'; -- NULL (empty result)
SELECT 'hello' LIKE NULL; -- NULL (empty result)
SELECT NULL GLOB 'h*'; -- NULL (empty result)
SELECT NULL REGEXP 'abc'; -- NULL (empty result)
| Feature | LIKE | GLOB | REGEXP |
|---|---|---|---|
| Case sensitivity | Case-insensitive (ASCII) | Case-sensitive | Depends on pattern |
| Zero-or-more wildcard | % | * | .* |
| Single-char wildcard | _ | ? | . |
| Character classes | No | [abc], [a-z], [^abc] | [abc], [a-z], [^abc], \d, \w, etc. |
| Escape clause | ESCAPE keyword | No | \ (backslash) |
| Negation | NOT LIKE | NOT GLOB | NOT REGEXP |
Turso uses a dynamic type system inherited from SQLite. Every value belongs to one of five storage classes – NULL, INTEGER, REAL, TEXT, or BLOB – but columns do not enforce a single type. Instead, each column has a type affinity that recommends how values should be stored. This page covers how type affinities are determined, how values are coerced on insertion, how the CAST expression performs explicit conversions, and how types interact during comparisons.
Every value in Turso has exactly one storage class at any given time. Use the typeof() function to inspect it:
SELECT typeof(42); -- integer
SELECT typeof(3.14); -- real
SELECT typeof('hello'); -- text
SELECT typeof(x'ABCD'); -- blob
SELECT typeof(NULL); -- null
Arithmetic and string operations produce values whose storage class follows from the operation:
SELECT typeof(1 + 1); -- integer
SELECT typeof(1 + 1.0); -- real (integer promoted to real)
SELECT typeof('a' || 'b'); -- text
A column’s type affinity is determined by the declared type name in the CREATE TABLE statement. Affinity is a recommendation, not a constraint – any column can store a value of any storage class.
| Affinity | Behavior on INSERT |
|---|---|
| TEXT | Numeric values are converted to their text representation before storage. |
| NUMERIC | Text that looks like an integer or real number is converted to INTEGER or REAL. |
| INTEGER | Behaves identically to NUMERIC on insertion. Differs from NUMERIC only in CAST expressions. |
| REAL | Like NUMERIC, but integer values are stored as floating-point. |
| BLOB | No conversion. Values are stored exactly as provided. |
The affinity of a column is determined by applying the following rules to the declared type name, in order. The first matching rule wins:
| Rule | Condition | Resulting Affinity |
|---|---|---|
| 1 | Type name contains "INT" | INTEGER |
| 2 | Type name contains "CHAR", "CLOB", or "TEXT" | TEXT |
| 3 | Type name contains "BLOB", or no type is specified | BLOB |
| 4 | Type name contains "REAL", "FLOA", or "DOUB" | REAL |
| 5 | Otherwise | NUMERIC |
The matching is case-insensitive and checks for substrings anywhere in the declared type name. Because rules are applied in order, some type names produce counterintuitive results:
| Declared Type | Matching Rule | Affinity |
|---|---|---|
| INT, INTEGER, BIGINT, SMALLINT | 1 (contains “INT”) | INTEGER |
| TEXT, VARCHAR(255), CLOB | 2 (contains “TEXT”, “CHAR”, or “CLOB”) | TEXT |
| BLOB, (no type) | 3 (contains “BLOB” or empty) | BLOB |
| REAL, DOUBLE, FLOAT | 4 (contains “REAL”, “DOUB”, or “FLOA”) | REAL |
| NUMERIC, DECIMAL(10,5), BOOLEAN, DATE | 5 (no match above) | NUMERIC |
| CHARINT | 1 (contains “INT”, rule 1 before rule 2) | INTEGER |
| FLOATING POINT | 1 (contains “INT” in “POINT”) | INTEGER |
| STRING | 5 (no match for rules 1-4) | NUMERIC |
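The five rules are straightforward to encode directly. Here is a sketch of the affinity-determination algorithm in Python (the function name affinity_of is ours, for illustration only):

```python
def affinity_of(declared_type: str) -> str:
    """Apply the five affinity rules, in order, to a declared type name."""
    t = declared_type.upper()
    if "INT" in t:                                 # rule 1
        return "INTEGER"
    if "CHAR" in t or "CLOB" in t or "TEXT" in t:  # rule 2
        return "TEXT"
    if not t or "BLOB" in t:                       # rule 3 (also: no type given)
        return "BLOB"
    if "REAL" in t or "FLOA" in t or "DOUB" in t:  # rule 4
        return "REAL"
    return "NUMERIC"                               # rule 5

print(affinity_of("FLOATING POINT"))  # INTEGER ("INT" inside "POINT")
print(affinity_of("VARCHAR(255)"))    # TEXT
print(affinity_of(""))                # BLOB (no declared type)
print(affinity_of("DECIMAL(10,5)"))   # NUMERIC
```

Because rule 1 runs first, surprising names such as FLOATING POINT fall through to INTEGER, exactly as in the table above.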
When a value is inserted into a column, the column’s affinity determines whether a type conversion is attempted. Conversions only happen if they are lossless and reversible.
CREATE TABLE demo (
t TEXT,
n NUMERIC,
i INTEGER,
r REAL,
b BLOB
);
-- Insert the string '500' into every column
INSERT INTO demo VALUES ('500', '500', '500', '500', '500');
SELECT typeof(t), typeof(n), typeof(i), typeof(r), typeof(b) FROM demo;
-- text|integer|integer|real|text
Here is what happened to the string '500' in each column:
t (TEXT): kept as text; the value was already text.
n (NUMERIC): converted to INTEGER because '500' is a well-formed integer literal.
i (INTEGER): converted to INTEGER, identically to NUMERIC.
r (REAL): converted to REAL and stored as 500.0.
b (BLOB): no conversion; the text value is stored as-is.
-- Insert the integer 500 into every column
DELETE FROM demo;
INSERT INTO demo VALUES (500, 500, 500, 500, 500);
SELECT typeof(t), typeof(n), typeof(i), typeof(r), typeof(b) FROM demo;
-- text|integer|integer|real|integer
This time the TEXT column stores the text '500' because TEXT affinity converts numeric values to text, while the BLOB column performs no conversion and keeps the INTEGER 500 unchanged.
NULL values and BLOB values are never converted by affinity, regardless of the column’s declared type:
DELETE FROM demo;
INSERT INTO demo VALUES (NULL, NULL, NULL, NULL, NULL);
SELECT typeof(t), typeof(n), typeof(i), typeof(r), typeof(b) FROM demo;
-- null|null|null|null|null
DELETE FROM demo;
INSERT INTO demo VALUES (x'01', x'01', x'01', x'01', x'01');
SELECT typeof(t), typeof(n), typeof(i), typeof(r), typeof(b) FROM demo;
-- blob|blob|blob|blob|blob
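Because Turso follows SQLite's affinity rules, the coercions above can be cross-checked with Python's bundled sqlite3 module, for example:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE demo (t TEXT, n NUMERIC, i INTEGER, r REAL, b BLOB)")

# The same text value lands in a different storage class per column affinity.
con.execute("INSERT INTO demo VALUES ('500', '500', '500', '500', '500')")
row = con.execute(
    "SELECT typeof(t), typeof(n), typeof(i), typeof(r), typeof(b) FROM demo"
).fetchone()
print(row)  # ('text', 'integer', 'integer', 'real', 'text')
```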
The CAST expression explicitly converts a value to a different storage class.
CAST(expr AS type-name)
The type-name follows the same affinity determination rules as column type names. If expr is NULL, the result is always NULL.
Converts the value to a 64-bit signed integer.
SELECT CAST(3.14 AS INTEGER); -- 3
SELECT CAST(9.99 AS INTEGER); -- 9
SELECT CAST(-7.8 AS INTEGER); -- -7
SELECT CAST('42' AS INTEGER); -- 42
SELECT CAST('123abc' AS INTEGER); -- 123
SELECT CAST('abc' AS INTEGER); -- 0
SELECT CAST(' 42 ' AS INTEGER); -- 42
SELECT CAST('' AS INTEGER); -- 0
SELECT CAST('99.9' AS INTEGER); -- 99
Converts the value to an 8-byte IEEE 754 floating-point number.
SELECT CAST(42 AS REAL); -- 42.0
SELECT CAST('3.14' AS REAL); -- 3.14
SELECT CAST('123.45abc' AS REAL); -- 123.45
Converts the value to a text string.
SELECT CAST(42 AS TEXT); -- 42
SELECT CAST(x'68656C6C6F' AS TEXT); -- hello
SELECT typeof(CAST(42 AS TEXT)); -- text
Converts the value to a blob.
SELECT typeof(CAST('hello' AS BLOB)); -- blob
CAST to NUMERIC returns either an INTEGER or a REAL, depending on the input: values that are already INTEGER or REAL pass through unchanged, while TEXT values become INTEGER when they are well-formed integer literals and REAL when they are other well-formed numeric literals.
This is where INTEGER and NUMERIC affinity differ in CAST expressions:
SELECT CAST(4.0 AS INTEGER), typeof(CAST(4.0 AS INTEGER));
-- 4|integer
SELECT CAST(4.0 AS NUMERIC), typeof(CAST(4.0 AS NUMERIC));
-- 4.0|real
CAST(4.0 AS INTEGER) truncates the real value to an integer, while CAST(4.0 AS NUMERIC) preserves the real type because the input is already a REAL.
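The distinction is observable in any SQLite-compatible engine; for instance, with Python's sqlite3 module (the NUMERIC result assumes a reasonably current SQLite library):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# INTEGER affinity truncates the REAL; NUMERIC leaves an existing REAL alone.
row = con.execute(
    "SELECT CAST(4.0 AS INTEGER), typeof(CAST(4.0 AS INTEGER)), "
    "typeof(CAST(4.0 AS NUMERIC))"
).fetchone()
print(row)  # (4, 'integer', 'real') on current SQLite versions
```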
When Turso compares two values, it may apply affinity conversions to the operands before performing the comparison. The rules depend on the affinities of the expressions being compared.
Before a comparison is performed, the following rules are applied in order:
If one operand has INTEGER, REAL, or NUMERIC affinity and the other has TEXT, BLOB, or no affinity : NUMERIC affinity is applied to the non-numeric operand. This means text that looks like a number is converted to a number before comparing.
If one operand has TEXT affinity and the other has no affinity : TEXT affinity is applied to the operand with no affinity.
Otherwise : No conversion is applied. Values are compared using their existing storage classes.
When comparing values of different storage classes, Turso follows this ordering:
NULL is lower than every other value. (Note that NULL < x evaluates to NULL, not TRUE. Use IS NULL to test for nulls. This ordering applies to ORDER BY and similar contexts.)
INTEGER and REAL values are lower than TEXT and BLOB values, and compare numerically with each other.
TEXT values are lower than BLOB values, and compare according to the applicable collation sequence.
BLOB values are highest, and compare using memcmp ordering.
Column affinity can cause the same literal to compare differently depending on the column’s declared type:
CREATE TABLE inventory (
label TEXT,
count NUMERIC
);
INSERT INTO inventory VALUES ('500', '500');
-- TEXT column: integer 40 is converted to text '40' for comparison
-- String comparison: '500' > '40' (compares character by character)
SELECT label < 40, label < 60, label < 600 FROM inventory;
-- 0|1|1
-- NUMERIC column: stored as integer 500, numeric comparison
SELECT count < 40, count < 60, count < 600 FROM inventory;
-- 0|0|1
In the TEXT column example, the integer literal 40 is converted to the string '40' before comparing. In string comparison, '500' < '60' is true because '5' < '6' lexicographically. In the NUMERIC column, the value is already stored as integer 500, so the comparison is purely numeric.
Expressions in SQL do not always carry an affinity. The rules for expression affinity are:
| Expression | Affinity |
|---|---|
| Column reference | Same affinity as the column |
| CAST(expr AS type) | Affinity determined by the type name |
| Unary + applied to a column (e.g., +column) | No affinity |
| Any operator or function result | No affinity |
The unary + operator is a common technique to strip affinity from a column reference, forcing the value to be compared without automatic type coercion.
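The effect of stripping affinity can be seen with a TEXT column holding '500', mirroring the inventory example above; here is a sketch against Python's sqlite3 module:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE inventory (label TEXT)")
con.execute("INSERT INTO inventory VALUES ('500')")

# label < 600: TEXT affinity converts 600 to '600', and '500' < '600' is true.
# +label < 600: no affinity on either side, so the TEXT value and the INTEGER
# compare by storage class, and every INTEGER sorts below every TEXT value.
row = con.execute("SELECT label < 600, +label < 600 FROM inventory").fetchone()
print(row)  # (1, 0)
```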
The typeof() function returns a string indicating the storage class of its argument. It is the primary tool for inspecting how Turso stores a value.
SELECT typeof(42); -- integer
SELECT typeof(3.14); -- real
SELECT typeof('hello'); -- text
SELECT typeof(x'FF'); -- blob
SELECT typeof(NULL); -- null
SELECT typeof(CAST('42' AS INTEGER)); -- integer
The return value is always one of: 'null', 'integer', 'real', 'text', or 'blob'.
Turso supports the IN, NOT IN, BETWEEN, and NOT BETWEEN operators for testing whether a value belongs to a set or falls within a range. These operators are commonly used in WHERE clauses to filter rows, but they can appear anywhere an expression is allowed.
expr [NOT] IN (value [, ...])
expr [NOT] IN (select-stmt)
The IN operator tests whether the left-hand expression is equal to any value in the right-hand set. The set can be a parenthesized list of scalar values or the result of a subquery. IN returns 1 (true) if a match is found and 0 (false) if no match is found, subject to the NULL handling rules described below.
NOT IN is the logical negation of IN. It returns 1 (true) when the left-hand expression does not match any value in the set, and 0 (false) when a match is found.
When the right-hand side is a subquery, it must return a single column. Each row returned by the subquery is treated as one element in the set.
Turso also allows an empty parenthesized list (). When the right-hand side is an empty set, IN always returns 0 (false) and NOT IN always returns 1 (true), regardless of the left-hand operand – even if it is NULL.
The interaction between IN/NOT IN and NULL values follows three-valued logic. The result depends on whether a match is found, whether the set contains NULL, and whether the left operand is NULL.
| Left operand | Set contains NULL | Match found | Empty set | IN result | NOT IN result |
|---|---|---|---|---|---|
| non-NULL | no | no | no | 0 | 1 |
| non-NULL | no | yes | – | 1 | 0 |
| non-NULL | yes | no | no | NULL | NULL |
| non-NULL | yes | yes | – | 1 | 0 |
| NULL | any | any | no | NULL | NULL |
| any | any | – | yes | 0 | 1 |
Key rules to remember:
If a match is found, IN returns 1 and NOT IN returns 0, regardless of any NULLs in the set.
If no match is found and the set contains NULL, the result is NULL for both IN and NOT IN. The NULL represents “unknown” – the value might match the unknown element.
If the left operand is NULL and the set is not empty, the result is NULL.
If the set is empty, IN returns 0 and NOT IN returns 1 – even if the left operand is NULL. An empty set cannot contain any value.
-- Basic IN with a value list
SELECT 1 IN (1, 2, 3);
-- 1
SELECT 4 IN (1, 2, 3);
-- 0
-- IN with string values
SELECT 'apple' IN ('apple', 'banana', 'cherry');
-- 1
-- NOT IN excludes matching values
SELECT 'grape' NOT IN ('apple', 'banana', 'cherry');
-- 1
-- IN with an empty list always returns 0
SELECT 1 IN ();
-- 0
-- NOT IN with an empty list always returns 1
SELECT NULL NOT IN ();
-- 1
-- NULL handling: left operand is NULL
SELECT NULL IN (1, 2, 3);
-- NULL
-- Match found despite NULLs in the set
SELECT 1 IN (1, 2, NULL);
-- 1
-- No match and set contains NULL: result is NULL
SELECT 4 IN (1, 2, NULL);
-- NULL
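The three-valued results in the table surface as Python None values when run through the sqlite3 module:

```python
import sqlite3

con = sqlite3.connect(":memory:")
row = con.execute(
    "SELECT 1 IN (1, 2, NULL), "  # match found despite NULL in the set: 1
    "4 IN (1, 2, NULL), "         # no match, set contains NULL: NULL
    "4 NOT IN (1, 2, NULL)"       # likewise NULL
).fetchone()
print(row)  # (1, None, None)
```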
-- Filter rows using IN with a value list
CREATE TABLE products (
id INTEGER PRIMARY KEY,
name TEXT,
category TEXT,
price REAL,
stock INTEGER
);
INSERT INTO products VALUES (1, 'Widget', 'Hardware', 9.99, 100);
INSERT INTO products VALUES (2, 'Gadget', 'Electronics', 24.99, 50);
INSERT INTO products VALUES (3, 'Gizmo', 'Electronics', 49.99, 25);
INSERT INTO products VALUES (4, 'Bolt', 'Hardware', 1.99, 500);
INSERT INTO products VALUES (5, 'Sensor', 'Electronics', 14.99, 75);
SELECT name, category FROM products
WHERE category NOT IN ('Electronics');
-- Widget|Hardware
-- Bolt|Hardware
-- IN with a subquery
CREATE TABLE products (
id INTEGER PRIMARY KEY,
name TEXT,
category TEXT,
price REAL,
stock INTEGER
);
INSERT INTO products VALUES (1, 'Widget', 'Hardware', 9.99, 100);
INSERT INTO products VALUES (2, 'Gadget', 'Electronics', 24.99, 50);
INSERT INTO products VALUES (3, 'Gizmo', 'Electronics', 49.99, 25);
INSERT INTO products VALUES (4, 'Bolt', 'Hardware', 1.99, 500);
INSERT INTO products VALUES (5, 'Sensor', 'Electronics', 14.99, 75);
CREATE TABLE featured_ids (id INTEGER);
INSERT INTO featured_ids VALUES (2);
INSERT INTO featured_ids VALUES (4);
SELECT name, price FROM products
WHERE id IN (SELECT id FROM featured_ids);
-- Gadget|24.99
-- Bolt|1.99
-- NOT IN with a subquery to find customers with no pending orders
CREATE TABLE orders (
id INTEGER PRIMARY KEY,
customer TEXT,
amount REAL,
status TEXT
);
INSERT INTO orders VALUES (1, 'Alice', 150.00, 'shipped');
INSERT INTO orders VALUES (2, 'Bob', 75.50, 'pending');
INSERT INTO orders VALUES (3, 'Alice', 200.00, 'delivered');
INSERT INTO orders VALUES (4, 'Carol', 50.00, 'shipped');
INSERT INTO orders VALUES (5, 'Bob', 300.00, 'delivered');
SELECT customer, amount FROM orders
WHERE customer NOT IN (
SELECT customer FROM orders WHERE status = 'pending'
);
-- Alice|150.0
-- Alice|200.0
-- Carol|50.0
expr [NOT] BETWEEN expr AND expr
The BETWEEN operator tests whether a value falls within an inclusive range. The expression:
x BETWEEN y AND z
is equivalent to:
x >= y AND x <= z
except that with BETWEEN, the x expression is evaluated only once. This rarely affects the result, but it matters for performance when x is expensive, and it can change the result when x is non-deterministic (for example, a call to random()).
NOT BETWEEN inverts the test. The expression x NOT BETWEEN y AND z is equivalent to x < y OR x > z.
BETWEEN works with any data type that supports comparison: integers, reals, text (compared according to the active collation), and blobs.
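The equivalence between BETWEEN and the expanded comparison can be spot-checked mechanically; a sketch using Python's sqlite3 module:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# BETWEEN and the expanded >= / <= form agree for every tested value.
for x in (0, 1, 5, 10, 15):
    between = con.execute("SELECT ? BETWEEN 1 AND 10", (x,)).fetchone()[0]
    expanded = con.execute("SELECT ? >= 1 AND ? <= 10", (x, x)).fetchone()[0]
    assert between == expanded
print("equivalent for all tested values")
```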
-- Numeric range check
SELECT 5 BETWEEN 1 AND 10;
-- 1
SELECT 15 BETWEEN 1 AND 10;
-- 0
-- NOT BETWEEN
SELECT 5 NOT BETWEEN 1 AND 10;
-- 0
SELECT 15 NOT BETWEEN 1 AND 10;
-- 1
-- BETWEEN with text values (compared lexicographically)
SELECT 'M' BETWEEN 'A' AND 'Z';
-- 1
SELECT 'banana' BETWEEN 'apple' AND 'cherry';
-- 1
-- Filter products by price range
CREATE TABLE products (
id INTEGER PRIMARY KEY,
name TEXT,
category TEXT,
price REAL,
stock INTEGER
);
INSERT INTO products VALUES (1, 'Widget', 'Hardware', 9.99, 100);
INSERT INTO products VALUES (2, 'Gadget', 'Electronics', 24.99, 50);
INSERT INTO products VALUES (3, 'Gizmo', 'Electronics', 49.99, 25);
INSERT INTO products VALUES (4, 'Bolt', 'Hardware', 1.99, 500);
INSERT INTO products VALUES (5, 'Sensor', 'Electronics', 14.99, 75);
SELECT name, price FROM products
WHERE price BETWEEN 10.00 AND 30.00;
-- Gadget|24.99
-- Sensor|14.99
-- NOT BETWEEN to find outlier prices
CREATE TABLE products (
id INTEGER PRIMARY KEY,
name TEXT,
category TEXT,
price REAL,
stock INTEGER
);
INSERT INTO products VALUES (1, 'Widget', 'Hardware', 9.99, 100);
INSERT INTO products VALUES (2, 'Gadget', 'Electronics', 24.99, 50);
INSERT INTO products VALUES (3, 'Gizmo', 'Electronics', 49.99, 25);
INSERT INTO products VALUES (4, 'Bolt', 'Hardware', 1.99, 500);
INSERT INTO products VALUES (5, 'Sensor', 'Electronics', 14.99, 75);
SELECT name, price FROM products
WHERE price NOT BETWEEN 10.00 AND 30.00;
-- Widget|9.99
-- Gizmo|49.99
-- Bolt|1.99
-- BETWEEN with date strings (ISO 8601 format sorts correctly)
CREATE TABLE events (
id INTEGER PRIMARY KEY,
name TEXT,
event_date TEXT
);
INSERT INTO events VALUES (1, 'Launch', '2024-01-15');
INSERT INTO events VALUES (2, 'Review', '2024-03-20');
INSERT INTO events VALUES (3, 'Release', '2024-06-01');
INSERT INTO events VALUES (4, 'Summit', '2024-09-10');
SELECT name, event_date FROM events
WHERE event_date BETWEEN '2024-01-01' AND '2024-06-30';
-- Launch|2024-01-15
-- Review|2024-03-20
-- Release|2024-06-01
IN and BETWEEN can be used together in the same WHERE clause, combined with AND and OR:
CREATE TABLE orders (
id INTEGER PRIMARY KEY,
customer TEXT,
amount REAL,
status TEXT
);
INSERT INTO orders VALUES (1, 'Alice', 150.00, 'shipped');
INSERT INTO orders VALUES (2, 'Bob', 75.50, 'pending');
INSERT INTO orders VALUES (3, 'Alice', 200.00, 'delivered');
INSERT INTO orders VALUES (4, 'Carol', 50.00, 'shipped');
INSERT INTO orders VALUES (5, 'Bob', 300.00, 'delivered');
-- Orders that are shipped or delivered with amount in a range
SELECT customer, amount FROM orders
WHERE status IN ('shipped', 'delivered')
AND amount BETWEEN 100.00 AND 250.00;
-- Alice|150.0
-- Alice|200.0
Both IN and BETWEEN have defined positions in the operator precedence hierarchy. From highest to lowest among the comparison operators:
| Precedence | Operators |
|---|---|
| Higher | <, >, <=, >= |
| | =, ==, <>, IS, IS NOT |
| | BETWEEN ... AND ... |
| | IN, LIKE, GLOB, MATCH, REGEXP |
| Lower | ISNULL, NOTNULL, NOT NULL |
The NOT keyword that precedes IN or BETWEEN is part of the operator itself (not a separate prefix operator) and does not change the precedence.
expr COLLATE {BINARY | NOCASE | RTRIM}
In column definitions:
column-name type-name COLLATE {BINARY | NOCASE | RTRIM}
In ORDER BY clauses:
ORDER BY expr COLLATE {BINARY | NOCASE | RTRIM} [{ASC | DESC}]
In index definitions:
CREATE INDEX index-name ON table-name (column-name COLLATE {BINARY | NOCASE | RTRIM})
A collation sequence determines how text values are compared and sorted. The COLLATE operator is a unary postfix operator that assigns a collation sequence to an expression, overriding whatever collation would otherwise apply.
Collation sequences affect only text comparisons. Numeric values are always compared numerically, and BLOB values are always compared byte-by-byte regardless of any collation setting.
Turso provides three built-in collation sequences:
| Collation | Behavior |
|---|---|
| BINARY | Compares text byte-by-byte using raw byte values. This is the default collation for all columns. Uppercase letters sort before lowercase ('A' < 'a'). |
| NOCASE | Same as BINARY, except the 26 uppercase ASCII letters (A-Z) are folded to their lowercase equivalents before comparison. Only ASCII characters are folded; accented or non-Latin characters are not affected. |
| RTRIM | Same as BINARY, except trailing space characters are ignored. 'abc' and 'abc ' are considered equal. |
Every column has an associated collation sequence. If no COLLATE clause is specified in the column definition, the default is BINARY.
A COLLATE clause in a column definition sets the default collation for that column. This collation is used whenever the column appears in a comparison or ORDER BY without an explicit COLLATE operator.
CREATE TABLE users (
id INTEGER PRIMARY KEY,
email TEXT COLLATE NOCASE,
username TEXT
);
INSERT INTO users VALUES (1, '[email protected]', 'alice');
INSERT INTO users VALUES (2, '[email protected]', 'bob');
INSERT INTO users VALUES (3, '[email protected]', 'carol');
-- NOCASE column: matches regardless of case
SELECT id, email FROM users WHERE email = '[email protected]';
-- 1|[email protected]
Because email is declared with COLLATE NOCASE, the comparison email = '[email protected]' matches the stored value '[email protected]'.
The COLLATE operator can be applied to any expression to control how a comparison is performed. It has very high precedence – higher than any binary operator – so it binds tightly to its operand.
-- Default BINARY comparison is case-sensitive
SELECT 'Hello' = 'hello';
-- 0
-- COLLATE NOCASE makes the comparison case-insensitive
SELECT 'Hello' = 'hello' COLLATE NOCASE;
-- 1
-- COLLATE RTRIM ignores trailing spaces
SELECT 'hello ' = 'hello' COLLATE RTRIM;
-- 1
When used in a WHERE clause, the COLLATE operator overrides the column’s default collation:
CREATE TABLE items (
id INTEGER PRIMARY KEY,
label TEXT
);
INSERT INTO items VALUES (1, 'abc ');
INSERT INTO items VALUES (2, 'abc');
INSERT INTO items VALUES (3, 'ABC');
INSERT INTO items VALUES (4, 'xyz');
-- Without COLLATE, default BINARY collation is exact
SELECT id FROM items WHERE label = 'abc';
-- 2
-- With COLLATE RTRIM, trailing spaces are ignored
SELECT id FROM items WHERE label = 'abc' COLLATE RTRIM;
-- 1
-- 2
When two values are compared using a binary operator (=, <, >, <=, >=, <>, IS, IS NOT), the collation sequence is determined by these rules, applied in order:
If either operand has an explicit postfix COLLATE operator, that collation is used. If both operands have explicit COLLATE operators, the leftmost one wins.
If either operand is a column with an assigned collation (from its CREATE TABLE definition), that column’s collation is used. If both operands are columns, the left operand’s collation takes precedence.
Otherwise, the default BINARY collation is used.
CREATE TABLE contacts (
id INTEGER PRIMARY KEY,
name TEXT COLLATE NOCASE,
tag TEXT
);
INSERT INTO contacts VALUES (1, 'alice', 'alpha');
INSERT INTO contacts VALUES (2, 'Alice', 'Alpha');
INSERT INTO contacts VALUES (3, 'BOB', 'beta');
-- Column collation applies: name is NOCASE, so 'alice' matches 'Alice'
SELECT id FROM contacts WHERE name = 'alice';
-- 1
-- 2
-- Default BINARY: tag has no COLLATE, so comparison is case-sensitive
SELECT id FROM contacts WHERE tag = 'alpha';
-- 1
-- Explicit COLLATE overrides defaults
SELECT id FROM contacts WHERE tag = 'alpha' COLLATE NOCASE;
-- 1
-- 2
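The leftmost-wins rule for explicit COLLATE operators can be checked directly; a sketch with Python's sqlite3 module:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Both operands carry an explicit COLLATE; the leftmost one decides.
a = con.execute("SELECT 'a' COLLATE BINARY = 'A' COLLATE NOCASE").fetchone()[0]
b = con.execute("SELECT 'a' COLLATE NOCASE = 'A' COLLATE BINARY").fetchone()[0]
print(a, b)  # 0 1
```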
The ORDER BY clause uses collation to determine sort order for text values. The collation is chosen as follows:
If the ORDER BY expression has an explicit COLLATE clause, that collation is used.
If the ORDER BY expression is a column, the column’s defined collation is used.
Otherwise, BINARY is used.
CREATE TABLE contacts (
id INTEGER PRIMARY KEY,
name TEXT COLLATE NOCASE,
tag TEXT
);
INSERT INTO contacts VALUES (1, 'alice', 'alpha');
INSERT INTO contacts VALUES (2, 'Alice', 'Alpha');
INSERT INTO contacts VALUES (3, 'BOB', 'beta');
INSERT INTO contacts VALUES (4, 'bob', 'Beta');
INSERT INTO contacts VALUES (5, 'Carol', 'gamma');
-- ORDER BY tag with default BINARY: uppercase sorts before lowercase
SELECT id, tag FROM contacts ORDER BY tag COLLATE BINARY;
-- 2|Alpha
-- 4|Beta
-- 1|alpha
-- 3|beta
-- 5|gamma
-- ORDER BY tag with NOCASE: case-insensitive sort
SELECT id, tag FROM contacts ORDER BY tag COLLATE NOCASE;
-- 1|alpha
-- 2|Alpha
-- 3|beta
-- 4|Beta
-- 5|gamma
A COLLATE clause can be specified on indexed columns. This is useful when you want an index to support case-insensitive lookups on a column that does not itself have a NOCASE collation.
CREATE TABLE users (
id INTEGER PRIMARY KEY,
email TEXT,
username TEXT
);
INSERT INTO users VALUES (1, '[email protected]', 'alice');
INSERT INTO users VALUES (2, '[email protected]', 'bob');
INSERT INTO users VALUES (3, '[email protected]', 'carol');
CREATE INDEX idx_email ON users(email COLLATE NOCASE);
SELECT id, email FROM users WHERE email = '[email protected]' COLLATE NOCASE;
-- 1|[email protected]
Collation sequences also affect grouping. When COLLATE NOCASE is applied to a grouping expression, values that differ only in case are placed in the same group.
CREATE TABLE words (id INTEGER PRIMARY KEY, word TEXT);
INSERT INTO words VALUES (1, 'apple');
INSERT INTO words VALUES (2, 'Apple');
INSERT INTO words VALUES (3, 'APPLE');
INSERT INTO words VALUES (4, 'banana');
-- GROUP BY with NOCASE: all case variants of 'apple' form one group
SELECT word COLLATE NOCASE, COUNT(*) AS cnt
FROM words
GROUP BY word COLLATE NOCASE;
-- apple|3
-- banana|1
-- DISTINCT with NOCASE: collapses case variants
SELECT DISTINCT word COLLATE NOCASE FROM words;
-- apple
-- banana
The COLLATE operator can be combined with the IN operator. Attach COLLATE to the left-hand expression to control how membership is tested.
SELECT 'hello' COLLATE NOCASE IN ('Hello', 'World');
-- 1
Turso supports the three built-in collation sequences (BINARY, NOCASE, RTRIM) and the COLLATE operator in expressions, column definitions, ORDER BY, and CREATE INDEX. Custom collation sequences registered via the C API (sqlite3_create_collation) are not supported. The PRAGMA collation_list command is not available.
Scalar functions accept zero or more arguments and return a single value. They can be used anywhere an expression is allowed: in SELECT lists, WHERE clauses, ORDER BY, and so on.
abs(X) -> numeric
Returns the absolute value of X. Returns NULL if X is NULL. Returns 0.0 if X is a string that cannot be converted to a number.
SELECT abs(-42); -- 42
SELECT abs(3.14); -- 3.14
SELECT abs(NULL); -- NULL
char(X1, X2, …, XN) -> text
Returns a string composed of characters having the Unicode code points given by the integer arguments.
SELECT char(72, 101, 108, 108, 111); -- 'Hello'
SELECT char(9731); -- snowman character
coalesce(X, Y, …) -> value
Returns the first non-NULL argument. Requires at least two arguments. Returns NULL only if every argument is NULL.
SELECT coalesce(NULL, NULL, 'hello'); -- 'hello'
SELECT coalesce(1, 2, 3); -- 1
concat(X, …) -> text
Returns a string formed by concatenating the text representations of all non-NULL arguments. NULL arguments are silently skipped. Returns an empty string if all arguments are NULL.
SELECT concat('Hello', ' ', 'World'); -- 'Hello World'
SELECT concat(NULL, 'hello'); -- 'hello'
concat_ws(separator, X, …) -> text
Returns a string formed by concatenating the non-NULL arguments after the first, using the first argument as a separator. Returns NULL if the separator is NULL.
SELECT concat_ws('-', '2024', '01', '15'); -- '2024-01-15'
SELECT concat_ws(', ', 'Alice', 'Bob'); -- 'Alice, Bob'
SELECT concat_ws(NULL, 'a', 'b'); -- NULL
format(FORMAT, …) -> text
Returns a string formed by substituting arguments into the format string, following printf conventions. Supports %s (string), %d (integer), %f (floating-point), and other standard format specifiers. Returns NULL if FORMAT is NULL.
SELECT format('%s has %d items', 'cart', 5); -- 'cart has 5 items'
SELECT format('%.2f', 3.14159); -- '3.14'
glob(pattern, string) -> integer
Tests whether the string matches the glob pattern. Equivalent to the expression string GLOB pattern. Returns 1 for a match, 0 otherwise. Glob patterns use * to match any sequence of characters and ? to match any single character. Matching is case-sensitive.
SELECT glob('*ello', 'Hello'); -- 1
SELECT glob('H?llo', 'Hello'); -- 1
hex(X) -> text
Returns an uppercase hexadecimal string rendering of the content of X. If X is a text string, each character is converted to its UTF-8 byte representation. If X is a blob, the raw bytes are converted.
SELECT hex('Hello'); -- '48656C6C6F'
SELECT hex(X'CAFE'); -- 'CAFE'
SELECT hex(zeroblob(4)); -- '00000000'
ifnull(X, Y) -> value
Returns X if X is not NULL, otherwise returns Y. Equivalent to coalesce(X, Y) but restricted to exactly two arguments.
SELECT ifnull(NULL, 'backup'); -- 'backup'
SELECT ifnull('value', 42); -- 'value'
iif(condition, true_value, false_value) -> value
Returns true_value if condition is true, or false_value if condition is false or NULL. The if keyword is an alias for iif.
SELECT iif(1, 'yes', 'no'); -- 'yes'
SELECT iif(0, 'yes', 'no'); -- 'no'
SELECT if(1 > 0, 'positive', 'non-positive'); -- 'positive'
instr(X, Y) -> integer
Returns the 1-based position of the first occurrence of Y within X, or 0 if Y is not found. Returns NULL if either argument is NULL. Both arguments must be the same type (both text or both blob).
SELECT instr('Hello World', 'World'); -- 7
SELECT instr('Hello World', 'xyz'); -- 0
SELECT instr('hello', NULL); -- NULL
last_insert_rowid() -> integer
Returns the ROWID of the most recent successful INSERT into a rowid table from the same database connection. Returns 0 if no INSERT has occurred.
SELECT last_insert_rowid(); -- 0 (before any inserts)
length(X) -> integer
Returns the number of characters in X if X is a text string, or the number of bytes if X is a blob. Returns NULL if X is NULL.
SELECT length('Hello'); -- 5
SELECT length(X'0102030405'); -- 5
SELECT length(NULL); -- NULL
like(pattern, string) -> integer
Tests whether the string matches the LIKE pattern. Equivalent to string LIKE pattern. The % wildcard matches any sequence of characters and _ matches any single character. Matching is case-insensitive for ASCII characters.
SELECT like('H%', 'Hello'); -- 1
SELECT like('H_llo', 'Hello'); -- 1
SELECT like('%world%', 'Hello World'); -- 1
likelihood(X, P) -> value
Returns X unchanged. The second argument P is a probability hint (a floating-point number between 0.0 and 1.0) for the query planner, indicating the likelihood that X is true. This function has no effect on the result, only on query optimization.
SELECT likelihood(1, 0.5); -- 1
likely(X) -> value
Returns X unchanged. Provides a hint to the query planner that X is usually true (equivalent to likelihood(X, 0.9375)). This function has no effect on the result, only on query optimization.
SELECT likely(1); -- 1
lower(X) -> text
Returns a copy of X with all ASCII uppercase characters converted to lowercase. Non-ASCII characters are unchanged.
SELECT lower('HELLO'); -- 'hello'
SELECT lower('Hello World'); -- 'hello world'
ltrim(X) -> text
ltrim(X, Y) -> text
rtrim(X) -> text
rtrim(X, Y) -> text
trim(X) -> text
trim(X, Y) -> text
ltrim removes characters from the left side of X. rtrim removes from the right side. trim removes from both sides. With one argument, spaces are removed. With two arguments, all characters found in Y are removed from the respective side(s).
SELECT ltrim(' Hello'); -- 'Hello'
SELECT ltrim('xxxHello', 'x'); -- 'Hello'
SELECT rtrim('Hello '); -- 'Hello'
SELECT rtrim('Helloxxxx', 'x'); -- 'Hello'
SELECT trim(' Hello '); -- 'Hello'
SELECT trim('xxHelloxx', 'x'); -- 'Hello'
max(X, Y, …) -> value
Returns the argument with the maximum value when given two or more arguments. Uses the standard comparison rules to determine ordering. Returns NULL if any argument is NULL. (With a single argument, max acts as an aggregate function instead.)
SELECT max(1, 5, 3); -- 5
SELECT max('alpha', 'beta'); -- 'beta'
min(X, Y, …) -> value
Returns the argument with the minimum value when given two or more arguments. Uses the standard comparison rules to determine ordering. Returns NULL if any argument is NULL. (With a single argument, min acts as an aggregate function instead.)
SELECT min(1, 5, 3); -- 1
SELECT min('alpha', 'beta'); -- 'alpha'
nullif(X, Y) -> value
Returns X if X and Y are different, or NULL if they are equal. Useful for converting sentinel values into NULLs.
SELECT nullif(5, 5); -- NULL
SELECT nullif(5, 3); -- 5
SELECT nullif('', ''); -- NULL
octet_length(X) -> integer
Returns the number of bytes in X. Unlike length, which counts characters for text values, octet_length always counts bytes. Returns NULL if X is NULL.
SELECT octet_length('Hello'); -- 5
SELECT octet_length(X'0102030405'); -- 5
printf(FORMAT, …) -> text
Alias for format. Returns a string formed by substituting arguments into the format string using printf conventions. Returns NULL if FORMAT is NULL.
SELECT printf('%d items', 5); -- '5 items'
SELECT printf('%.2f', 3.14159); -- '3.14'
quote(X) -> text
Returns a string that is the SQL literal representation of X. Text strings are surrounded by single quotes with internal quotes doubled. Blobs are returned as hex literals. NULL returns the string NULL. Integers and reals are returned as-is.
SELECT quote('Hello'); -- 'Hello' (with enclosing quotes)
SELECT quote(3.14); -- 3.14
SELECT quote(NULL); -- NULL
SELECT quote(X'48656C6C6F'); -- X'48656C6C6F'
random() -> integer
Returns a pseudo-random integer between -9223372036854775808 and +9223372036854775807. A different value is returned each time the function is called.
SELECT typeof(random()); -- 'integer'
randomblob(N) -> blob
Returns an N-byte blob containing pseudo-random bytes. Useful for generating unique identifiers or random data.
SELECT length(randomblob(16)); -- 16
SELECT hex(randomblob(4)); -- (random 8-character hex string)
replace(X, Y, Z) -> text
Returns a copy of X with every occurrence of Y replaced by Z. If Y is an empty string, X is returned unchanged.
SELECT replace('Hello World', 'World', 'Turso'); -- 'Hello Turso'
SELECT replace('aabbcc', 'bb', 'XX'); -- 'aaXXcc'
round(X) -> real
round(X, Y) -> real
Rounds X to Y decimal places. If Y is omitted, it defaults to 0. The result is always a floating-point value.
SELECT round(3.14159); -- 3.0
SELECT round(3.14159, 2); -- 3.14
SELECT round(123.5); -- 124.0
sign(X) -> integer
Returns -1 for negative values, 0 for zero, or +1 for positive values. Returns NULL if X is NULL or is a string or blob that cannot be converted to a number.
SELECT sign(-42); -- -1
SELECT sign(0); -- 0
SELECT sign(42); -- 1
SELECT sign(NULL); -- NULL
soundex(X) -> text
Returns the Soundex encoding of X. The Soundex encoding is a four-character string that represents the phonetic sound of the input. Returns ?000 if X is NULL or contains no ASCII letters.
SELECT soundex('Robert'); -- 'R163'
SELECT soundex('Rupert'); -- 'R163'
sqlite_source_id() -> text
Returns a string that identifies the specific version and build of the library. The format includes a date, time, and a SHA hash.
SELECT sqlite_source_id();
-- e.g. '2026-02-11 18:26:30 fd3ab2fd48b711aa9bec80562dd1175ce10f4d9a'
sqlite_version() -> text
Returns the version string for the SQLite-compatible library.
SELECT sqlite_version(); -- e.g. '3.50.4'
substr(X, Y) -> text
substr(X, Y, Z) -> text
substring(X, Y) -> text
substring(X, Y, Z) -> text
Returns a substring of X starting at position Y (1-based). If Z is given, it specifies the length of the substring; otherwise the substring extends to the end of the string. substring is an alias for substr.
If Y is negative, the substring starts that many characters from the end. If Z is negative, the substring consists of the Z characters preceding (and including) position Y.
SELECT substr('Hello World', 7); -- 'World'
SELECT substr('Hello World', 1, 5); -- 'Hello'
SELECT substring('Hello World', 7); -- 'World'
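The negative-argument cases can be checked directly. Turso follows SQLite here, so a quick sketch against Python's bundled sqlite3 illustrates both rules:

```python
import sqlite3

con = sqlite3.connect(":memory:")
q = lambda sql: con.execute(sql).fetchone()[0]

# Negative Y counts from the end of the string
print(q("SELECT substr('Hello World', -5)"))     # World
# Negative Z returns the |Z| characters preceding position Y
print(q("SELECT substr('Hello World', 5, -3)"))  # ell
```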
typeof(X) -> text
Returns the type of X as a string. The possible return values are 'null', 'integer', 'real', 'text', and 'blob'.
SELECT typeof(42); -- 'integer'
SELECT typeof(3.14); -- 'real'
SELECT typeof('Hello'); -- 'text'
SELECT typeof(NULL); -- 'null'
SELECT typeof(X'CAFE'); -- 'blob'
unhex(X) -> blob
unhex(X, Y) -> blob
Converts the hexadecimal string X into a blob. Returns NULL if X contains characters that are not hexadecimal digits, unless those characters appear in Y (the set of characters to ignore). Returns NULL if X or Y is NULL.
SELECT unhex('48656C6C6F'); -- Hello (as blob)
SELECT hex(unhex('48656C6C6F')); -- '48656C6C6F'
SELECT unhex('48GG', 'G'); -- H (as blob, 'G' chars are ignored)
unicode(X) -> integer
Returns the numeric Unicode code point of the first character of X. Returns NULL if X is an empty string.
SELECT unicode('A'); -- 65
SELECT unicode('Hello'); -- 72
unlikely(X) -> value
Returns X unchanged. Provides a hint to the query planner that X is usually false (equivalent to likelihood(X, 0.0625)). This function has no effect on the result, only on query optimization.
SELECT unlikely(1); -- 1
upper(X) -> text
Returns a copy of X with all ASCII lowercase characters converted to uppercase. Non-ASCII characters are unchanged.
SELECT upper('hello'); -- 'HELLO'
SELECT upper('Hello World'); -- 'HELLO WORLD'
zeroblob(N) -> blob
Returns a blob consisting of N zero-valued bytes (0x00). Useful as a placeholder for blob values to be filled in later with incremental I/O.
SELECT length(zeroblob(10)); -- 10
SELECT hex(zeroblob(4)); -- '00000000'
Aggregate functions compute a single result from a set of input values. They are most commonly used with a GROUP BY clause to produce one result per group of rows, but when used without GROUP BY, they treat the entire result set as a single group and return one row.
All built-in aggregate functions ignore NULL inputs (except count(*)). If every input to an aggregate function is NULL, the result is NULL – with the exceptions of count() (which returns 0) and total() (which returns 0.0). This NULL-skipping behavior is consistent with the SQL standard.
Aggregate functions can be combined with GROUP BY and HAVING to filter groups after aggregation. See GROUP BY and HAVING for details on grouping.
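The NULL-skipping rules are easiest to see side by side. Turso shares these semantics with SQLite, so a sketch using Python's bundled sqlite3 (a stand-in table `t` with one NULL row):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(10,), (None,), (20,)])

# count(*) sees all 3 rows; the other aggregates skip the NULL
row = con.execute(
    "SELECT count(*), count(x), sum(x), avg(x), total(x) FROM t"
).fetchone()
print(row)  # (3, 2, 30, 15.0, 30.0)
```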
The DISTINCT keyword can precede the argument of any single-argument aggregate to eliminate duplicate values before processing:
SELECT count(DISTINCT category) FROM products;
The examples below use the following tables:
CREATE TABLE products (
id INTEGER PRIMARY KEY,
name TEXT,
category TEXT,
price REAL
);
INSERT INTO products VALUES
(1, 'Laptop', 'Electronics', 999.99),
(2, 'Phone', 'Electronics', 699.99),
(3, 'Desk', 'Furniture', 299.99),
(4, 'Chair', 'Furniture', 199.99),
(5, 'Tablet', 'Electronics', 499.99);
CREATE TABLE orders (
id INTEGER PRIMARY KEY,
user_id INTEGER,
product TEXT,
amount REAL
);
INSERT INTO orders VALUES
(1, 1, 'Laptop', 999.99),
(2, 1, 'Phone', 699.99),
(3, 2, 'Desk', 299.99),
(4, 3, 'Chair', 199.99),
(5, 1, 'Tablet', 499.99);
avg(X) -> real
Returns the average of all non-NULL values of X. The result is always a floating-point value when there is at least one non-NULL input. String and BLOB values that do not look like numbers are treated as 0. Returns NULL if all inputs are NULL or if the input set is empty. Internally, avg(X) is equivalent to total(X) / count(X).
SELECT avg(amount) FROM orders;
-- 539.99
SELECT user_id, avg(amount) AS avg_order
FROM orders
GROUP BY user_id;
-- 1 | 733.323333333333
-- 2 | 299.99
-- 3 | 199.99
The DISTINCT keyword causes duplicate values to be removed before computing the average:
SELECT avg(DISTINCT x) FROM (SELECT 1 AS x UNION ALL SELECT 2 UNION ALL SELECT 2 UNION ALL SELECT 3);
-- 2.0
count(X) -> integer
count(*) -> integer
The count(X) form returns the number of times X is not NULL. The count(*) form returns the total number of rows in the group, regardless of NULL values.
count() always returns an integer and never returns NULL. For an empty input set, both forms return 0.
SELECT count(*) FROM orders;
-- 5
SELECT count(amount) FROM orders;
-- 5
SELECT count(DISTINCT user_id) FROM orders;
-- 3
SELECT user_id, count(*) AS order_count
FROM orders
GROUP BY user_id;
-- 1 | 3
-- 2 | 1
-- 3 | 1
group_concat(X) -> text
group_concat(X, Y) -> text
Returns a string formed by concatenating all non-NULL values of X, separated by Y. If Y is omitted, a comma (",") is used as the default separator. Returns NULL if all inputs are NULL.
The order of concatenated elements is determined by the order in which rows are processed. When used with GROUP BY, the order within each group is not guaranteed unless the query uses a subquery or other mechanism to control row ordering.
SELECT group_concat(product) FROM orders;
-- Laptop,Phone,Desk,Chair,Tablet
SELECT group_concat(product, ' | ') FROM orders;
-- Laptop | Phone | Desk | Chair | Tablet
SELECT user_id, group_concat(product, ', ') AS products
FROM orders
GROUP BY user_id;
-- 1 | Laptop, Phone, Tablet
-- 2 | Desk
-- 3 | Chair
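A common workaround for imposing an order is to feed the aggregate from an ordered subquery. This ordering is honored by SQLite-style engines for simple queries, though it is not guaranteed by the SQL standard; a sketch with Python's bundled sqlite3, using a stand-in `orders` table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (product TEXT)")
con.executemany("INSERT INTO orders VALUES (?)",
                [("Laptop",), ("Phone",), ("Desk",), ("Chair",), ("Tablet",)])

# Order the rows in a subquery before aggregating
result = con.execute(
    "SELECT group_concat(product, ', ') "
    "FROM (SELECT product FROM orders ORDER BY product)"
).fetchone()[0]
print(result)  # Chair, Desk, Laptop, Phone, Tablet
```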
string_agg(X, Y) -> text
An alias for group_concat(X, Y). Returns a string formed by concatenating all non-NULL values of X, separated by Y. Unlike group_concat, the separator argument Y is required. This form provides compatibility with PostgreSQL and SQL Server, which use string_agg rather than group_concat.
SELECT string_agg(product, ', ') FROM orders;
-- Laptop, Phone, Desk, Chair, Tablet
max(X) -> value
Returns the maximum value of all non-NULL values of X. The maximum is determined by the sort order that would be used by ORDER BY on the same column. Returns NULL if all inputs are NULL or if the input set is empty.
max() works on any type: integers, reals, text, and BLOBs are compared using the normal comparison rules.
SELECT max(amount) FROM orders;
-- 999.99
SELECT max(name) FROM products;
-- Tablet
SELECT category, max(price) AS most_expensive
FROM products
GROUP BY category;
-- Electronics | 999.99
-- Furniture | 299.99
min(X) -> value
Returns the minimum non-NULL value of X. The minimum is the value that would appear first in an ORDER BY on the same column. Returns NULL if all inputs are NULL or if the input set is empty.
min() works on any type: integers, reals, text, and BLOBs are compared using the normal comparison rules.
SELECT min(amount) FROM orders;
-- 199.99
SELECT min(name) FROM products;
-- Chair
SELECT category, min(price) AS cheapest
FROM products
GROUP BY category;
-- Electronics | 499.99
-- Furniture | 199.99
sum(X) -> integer or real
Returns the sum of all non-NULL values of X. If all inputs are integers, the result is an integer. If any input is a real number, the result is a real. Returns NULL if all inputs are NULL or if the input set is empty.
An integer overflow error is raised if all inputs are integers and the sum exceeds the integer range.
SELECT sum(amount) FROM orders;
-- 2699.95
-- sum returns integer type when all inputs are integers
SELECT typeof(sum(x)) FROM (SELECT 1 AS x UNION ALL SELECT 2 UNION ALL SELECT 3);
-- integer
-- sum returns real type when any input is real
SELECT typeof(sum(x)) FROM (SELECT 1.0 AS x UNION ALL SELECT 2.0 UNION ALL SELECT 3.0);
-- real
SELECT user_id, sum(amount) AS total_spent
FROM orders
GROUP BY user_id
HAVING total_spent > 500;
-- 1 | 2199.97
The DISTINCT keyword causes duplicate values to be removed before summing:
SELECT sum(DISTINCT x) FROM (SELECT 1 AS x UNION ALL SELECT 2 UNION ALL SELECT 2 UNION ALL SELECT 3);
-- 6
total(X) -> real
Returns the sum of all non-NULL values of X, similar to sum(), but with two key differences:
total() always returns a floating-point value (0.0 for an empty set), whereas sum() returns NULL for an empty set.
total() never raises an integer overflow error.
Use total() when you need a guaranteed numeric result and want to avoid NULL checks.
| Behavior | sum(X) | total(X) |
|---|---|---|
| Empty set result | NULL | 0.0 |
| Return type (integer inputs) | integer | real |
| Return type (real inputs) | real | real |
| Integer overflow | raises error | no error |
SELECT total(amount) FROM orders;
-- 2699.95
-- total returns 0.0 on an empty table; sum returns NULL
CREATE TABLE empty (x INTEGER);
SELECT sum(x), total(x) FROM empty;
-- (NULL) | 0.0
-- total always returns real type, even with integer inputs
SELECT typeof(total(x)) FROM (SELECT 1 AS x UNION ALL SELECT 2 UNION ALL SELECT 3);
-- real
The following aggregate functions are provided by the percentile extension. They are not built-in and must be loaded explicitly before use. Once loaded, they behave like any other aggregate function and can be used with GROUP BY, HAVING, and DISTINCT.
.load liblimbo_percentile
median(X) -> real
Returns the median value of all non-NULL values of X. For an odd number of values, this is the middle value. For an even number of values, this is the average of the two middle values. Returns NULL if all inputs are NULL.
.load liblimbo_percentile
CREATE TABLE scores (value REAL);
INSERT INTO scores VALUES (1.0), (2.0), (3.0), (4.0), (5.0);
SELECT median(value) FROM scores;
-- 3.0
percentile(Y, P) -> real
Returns the value at the P-th percentile among all non-NULL values of Y, using linear interpolation. P is expressed on a 0-to-100 scale (e.g., 50 for the median) and must be the same value for every row in the group. An error is raised if P is outside the range 0 to 100 or if different rows supply different values of P. Returns NULL if all inputs are NULL.
SELECT percentile(value, 50) FROM scores;
-- 3.0
percentile_cont(Y, P) -> real
Computes a percentile using continuous distribution, following the SQL standard PERCENTILE_CONT semantics. P is expressed on a 0.0-to-1.0 scale (e.g., 0.5 for the median) and must be the same value for every row in the group. Uses linear interpolation between adjacent values when the percentile falls between two data points. Returns NULL if all inputs are NULL.
SELECT percentile_cont(value, 0.5) FROM scores;
-- 3.0
percentile_disc(Y, P) -> real
Computes a percentile using discrete distribution, following the SQL standard PERCENTILE_DISC semantics. P is expressed on a 0.0-to-1.0 scale and must be the same value for every row in the group. Unlike percentile_cont, this function always returns an actual value from the input set rather than interpolating between values. Returns NULL if all inputs are NULL.
SELECT percentile_disc(value, 0.5) FROM scores;
-- 3.0
stddev(X) -> real
Returns the population standard deviation of all non-NULL values of X. The standard deviation measures how spread out values are from their mean. Returns NULL if all inputs are NULL or if the input set is empty.
SELECT stddev(value) FROM scores;
-- 1.58113883008419
Turso provides seven date and time functions for computing dates, times, and timestamps. All functions operate on UTC internally and accept an optional list of modifiers that transform the result.
date(time-value, modifier, …) -> TEXT
Returns the date as text in the format YYYY-MM-DD.
SELECT date('2024-06-15 14:30:00'); -- 2024-06-15
SELECT date('2024-06-15', '+10 days'); -- 2024-06-25
SELECT date('2024-06-15', '-30 days'); -- 2024-05-16
time(time-value, modifier, …) -> TEXT
Returns the time as text in the format HH:MM:SS.
SELECT time('2024-06-15 14:30:45'); -- 14:30:45
SELECT time('14:30:00'); -- 14:30:00
datetime(time-value, modifier, …) -> TEXT
Returns the date and time as text in the format YYYY-MM-DD HH:MM:SS. When the subsec modifier is present, the output includes fractional seconds: YYYY-MM-DD HH:MM:SS.SSS.
SELECT datetime('2024-06-15 14:30:00'); -- 2024-06-15 14:30:00
SELECT datetime('2024-06-15 14:30:00', '+3 hours'); -- 2024-06-15 17:30:00
SELECT datetime('2024-06-15 14:30:00', 'subsec'); -- 2024-06-15 14:30:00.000
Converting a Unix timestamp to a human-readable datetime:
SELECT datetime(1718461800, 'unixepoch'); -- 2024-06-15 14:30:00
julianday(time-value, modifier, …) -> REAL
Returns the Julian day number – the fractional number of days since noon in Greenwich on November 24, 4714 B.C. (Proleptic Gregorian calendar).
SELECT julianday('2024-06-15'); -- 2460476.5
Compute the number of days between two dates:
SELECT julianday('2024-06-15') - julianday('2024-01-01'); -- 166.0
unixepoch(time-value, modifier, …) -> INTEGER
Returns the number of seconds since 1970-01-01 00:00:00 UTC. Returns an integer by default; use the subsec modifier for a floating-point result with fractional seconds.
SELECT unixepoch('2024-06-15 14:30:00'); -- 1718461800
Compute the number of seconds between two timestamps:
SELECT unixepoch('2024-06-15 14:30:00') - unixepoch('2024-06-15 12:00:00'); -- 9000
strftime(format, time-value, modifier, …) -> TEXT
Returns the date formatted according to the format string specified as the first argument. The format string supports the substitutions listed in the Format Specifiers table below.
SELECT strftime('%Y/%m/%d', '2024-06-15'); -- 2024/06/15
SELECT strftime('%H:%M', '2024-06-15 14:30:00'); -- 14:30
SELECT strftime('%j', '2024-06-15'); -- 167
SELECT strftime('%s', '2024-06-15 14:30:00'); -- 1718461800
SELECT strftime('%J', '2024-06-15'); -- 2460476.5
The other date/time functions can be expressed as strftime equivalents:
| Function | strftime Equivalent |
|---|---|
date(...) | strftime('%F', ...) |
time(...) | strftime('%T', ...) |
datetime(...) | strftime('%F %T', ...) |
julianday(...) | CAST(strftime('%J', ...) AS REAL) |
unixepoch(...) | CAST(strftime('%s', ...) AS INT) |
timediff(time-value-A, time-value-B) -> TEXT
Returns a text string describing the amount of time that must be added to the second argument to reach the first. The result has the format [+-]YYYY-MM-DD HH:MM:SS.SSS.
The timediff function does not accept modifiers. It satisfies the invariant that datetime(A) equals datetime(B, timediff(A, B)).
SELECT timediff('2024-06-15', '2024-01-01'); -- +0000-05-14 00:00:00.000
SELECT timediff('2024-06-15 14:30:00', '2024-06-15 12:00:00'); -- +0000-00-00 02:30:00.000
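The invariant can be verified directly by feeding the timediff result back in as a modifier. A sketch against stock SQLite via Python's bundled sqlite3 (timediff requires SQLite 3.41+ there, hence the version guard):

```python
import sqlite3

con = sqlite3.connect(":memory:")

if sqlite3.sqlite_version_info >= (3, 41, 0):  # timediff needs SQLite 3.41+
    diff = con.execute(
        "SELECT timediff('2024-06-15', '2024-01-01')"
    ).fetchone()[0]
    print(diff)  # +0000-05-14 00:00:00.000
    # Applying the diff as a modifier to the second argument reproduces the first
    print(con.execute("SELECT datetime('2024-01-01', ?)", (diff,)).fetchone()[0])
    # 2024-06-15 00:00:00
```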
Because months and years vary in length, timediff may return the same text for intervals that span different numbers of days. For precise day-level differences, subtract Julian day numbers instead:
SELECT julianday('2024-06-15') - julianday('2024-01-01'); -- 166.0
All date/time functions accept time values in the following formats:
| Format | Example |
|---|---|
YYYY-MM-DD | '2024-06-15' |
YYYY-MM-DD HH:MM | '2024-06-15 14:30' |
YYYY-MM-DD HH:MM:SS | '2024-06-15 14:30:00' |
YYYY-MM-DD HH:MM:SS.SSS | '2024-06-15 14:30:00.123' |
YYYY-MM-DDTHH:MM:SS | '2024-06-15T14:30:00' |
HH:MM:SS | '14:30:00' (assumes date 2000-01-01) |
'now' | Current date and time in UTC |
DDDDDDDDDD | Julian day number as a numeric value |
The ISO 8601 T separator between date and time is accepted interchangeably with a space:
SELECT date('2024-06-15T14:30:00'); -- 2024-06-15
An optional timezone suffix [+-]HH:MM or Z may follow any format that includes a time component. The suffix Z denotes UTC (a no-op). A non-zero offset is subtracted to convert the value to UTC.
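The UTC normalization is observable in the output. A sketch checking the behavior against stock SQLite through Python's bundled sqlite3:

```python
import sqlite3

con = sqlite3.connect(":memory:")
q = lambda sql: con.execute(sql).fetchone()[0]

# A +02:00 offset is subtracted to normalize the value to UTC
print(q("SELECT datetime('2024-06-15 14:30:00+02:00')"))  # 2024-06-15 12:30:00
# 'Z' marks the value as already UTC, so nothing changes
print(q("SELECT datetime('2024-06-15 14:30:00Z')"))       # 2024-06-15 14:30:00
```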
All date/time functions except timediff accept zero or more modifiers after the time value. Modifiers are applied from left to right; order matters.
Add or subtract a duration from the time value. The NNN value can be an integer or floating-point number, with an optional + or - prefix. The trailing s is optional (e.g., '+1 day' and '+1 days' are equivalent).
| Modifier | Effect |
|---|---|
NNN days | Add NNN days |
NNN hours | Add NNN hours |
NNN minutes | Add NNN minutes |
NNN seconds | Add NNN seconds |
NNN months | Add NNN months |
NNN years | Add NNN years |
SELECT date('2024-06-15', '+1 month'); -- 2024-07-15
SELECT date('2024-06-15', '+1 year'); -- 2025-06-15
SELECT datetime('2024-06-15 14:30:00', '+90 minutes'); -- 2024-06-15 16:00:00
SELECT datetime('2024-06-15 14:30:00', '+30 seconds'); -- 2024-06-15 14:30:30
Multiple modifiers chain together from left to right:
SELECT datetime('2024-06-15 14:30:00', '+1 year', '-2 months'); -- 2025-04-15 14:30:00
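Because modifiers apply strictly left to right, reordering them can change the result. A sketch against stock SQLite via Python's bundled sqlite3, which Turso's semantics track:

```python
import sqlite3

con = sqlite3.connect(":memory:")
q = lambda sql: con.execute(sql).fetchone()[0]

# start-of-month first, then add a month
print(q("SELECT date('2024-01-31', 'start of month', '+1 month')"))  # 2024-02-01
# Add a month first (Jan 31 + 1 month spills into March), then start-of-month
print(q("SELECT date('2024-01-31', '+1 month', 'start of month')"))  # 2024-03-01
```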
Shift a time value by a compound offset specified in time, date, or full datetime format. A leading + or - is required for date offsets.
| Modifier Format | Example |
|---|---|
[+-]HH:MM | '-05:00' |
[+-]HH:MM:SS | '+01:30:00' |
[+-]YYYY-MM-DD | '+0001-06-00' |
[+-]YYYY-MM-DD HH:MM:SS | '+0000-00-01 02:00:00' |
SELECT datetime('2024-06-15 14:30:00', '-05:00'); -- 2024-06-15 09:30:00
Truncate the time value backward to the beginning of a period.
| Modifier | Effect |
|---|---|
start of day | Sets time to 00:00:00 |
start of month | Sets to first day of the month at 00:00:00 |
start of year | Sets to January 1 at 00:00:00 |
SELECT datetime('2024-06-15 14:30:00', 'start of day'); -- 2024-06-15 00:00:00
SELECT date('2024-06-15', 'start of month'); -- 2024-06-01
SELECT date('2024-06-15', 'start of year'); -- 2024-01-01
Chaining start-of modifiers with arithmetic is useful for computing boundaries:
-- Last day of the current month
SELECT date('2024-06-15', 'start of month', '+1 month', '-1 day'); -- 2024-06-30
Advance the date forward to the next occurrence of weekday N, where Sunday is 0, Monday is 1, and Saturday is 6. If the date already falls on the requested weekday, it is left unchanged.
-- 2024-06-15 is a Saturday (6); the next Sunday (0) is 2024-06-16
SELECT date('2024-06-15', 'weekday 0'); -- 2024-06-16
-- First Tuesday in October 2024
SELECT date('2024-06-15', 'start of year', '+9 months', 'weekday 2'); -- 2024-10-01
When adding months or years produces an ambiguous date (for example, one month after January 31), the ceiling and floor modifiers control how the ambiguity is resolved.
ceiling (the default) rounds forward to the next valid date.
floor rounds backward to the last day of the previous month.
SELECT date('2024-01-31', '+1 month', 'ceiling'); -- 2024-03-02
SELECT date('2024-01-31', '+1 month', 'floor'); -- 2024-02-29
These modifiers control how a bare numeric time value (DDDDDDDDDD) is interpreted. They must appear immediately after the time value.
| Modifier | Effect |
|---|---|
unixepoch | Interpret the number as seconds since 1970-01-01 00:00:00 UTC |
julianday | Interpret the number as a Julian day number (the default) |
auto | Automatically choose based on magnitude: small values are Julian day numbers, large values are Unix timestamps |
SELECT datetime(1718461800, 'unixepoch'); -- 2024-06-15 14:30:00
SELECT datetime(2460476.5, 'julianday'); -- 2024-06-15 00:00:00
SELECT datetime(1718461800, 'auto'); -- 2024-06-15 14:30:00
| Modifier | Effect |
|---|---|
localtime | Convert from UTC to local time |
utc | Convert from local time to UTC |
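Since localtime output depends on the machine's timezone, the clearest self-contained demonstration is the round trip: utc reinterprets a local time as UTC, and localtime converts it back. A sketch against stock SQLite via Python's bundled sqlite3 (away from DST transition instants, the chain returns the input unchanged):

```python
import sqlite3

con = sqlite3.connect(":memory:")
q = lambda sql: con.execute(sql).fetchone()[0]

# 'utc' converts local time to UTC; 'localtime' converts back,
# so chaining the two round-trips the value
print(q("SELECT datetime('2024-06-15 14:30:00', 'utc', 'localtime')"))
# 2024-06-15 14:30:00
```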
The subsec (or subsecond) modifier increases output resolution from seconds to milliseconds.
datetime or time: appends .SSS to the seconds field.
unixepoch: returns a floating-point value instead of an integer.
SELECT datetime('2024-06-15 14:30:00', 'subsec'); -- 2024-06-15 14:30:00.000
The strftime function accepts the following format specifiers:
| Specifier | Description | Example |
|---|---|---|
%d | Day of month: 01-31 | 15 |
%f | Fractional seconds: SS.SSS | 00.123 |
%H | Hour: 00-24 | 14 |
%j | Day of year: 001-366 | 167 |
%J | Julian day number (fractional) | 2460476.5 |
%m | Month: 01-12 | 06 |
%M | Minute: 00-59 | 30 |
%s | Seconds since 1970-01-01 | 1718461800 |
%S | Seconds: 00-59 | 45 |
%w | Day of week: 0-6 (Sunday=0) | 6 |
%W | Week of year: 00-53 | 24 |
%Y | Year: 0000-9999 | 2024 |
%% | Literal % character | % |
SELECT strftime('%f', '2024-06-15 14:30:00.123'); -- 00.123
SELECT strftime('%w', '2024-06-15'); -- 6
SELECT strftime('%W', '2024-06-15'); -- 24
SELECT strftime('%S', '2024-06-15 14:30:45'); -- 45
SELECT strftime('%m', '2024-06-15'); -- 06
Unrecognized format specifiers return NULL.
All date/time functions return NULL when given an invalid input, an out-of-range date, or an unrecognized modifier. The valid date range is 0000-01-01 00:00:00 through 9999-12-31 23:59:59.
All computations use the Proleptic Gregorian calendar. Each day is treated as exactly 86,400 seconds; leap seconds are not incorporated.
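The 86,400-second rule can be observed across a day that, in the real world, contained a leap second. A sketch checked against stock SQLite through Python's bundled sqlite3, using the strftime('%s') equivalent of unixepoch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
q = lambda sql: con.execute(sql).fetchone()[0]

# 2016-12-31 ended with a real-world leap second, but the date
# arithmetic still counts the day as exactly 86,400 seconds
delta = q("""SELECT CAST(strftime('%s','2017-01-01') AS INTEGER)
           - CAST(strftime('%s','2016-12-31') AS INTEGER)""")
print(delta)  # 86400
```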
Turso provides a full set of mathematical functions for trigonometry, logarithms, exponentiation, and rounding. All math functions accept integers, floating-point numbers, or strings that look like numbers. They return an IEEE 754 double-precision floating-point result, except when the input is an integer and the result can be represented exactly as an integer.
All math functions return NULL when any argument is NULL, when an argument is a blob or non-numeric string, or when the result would be mathematically undefined (a domain error).
pi() -> real
Returns an approximation of the mathematical constant pi.
SELECT pi(); -- 3.14159265358979
All trigonometric functions work in radians. Use radians() and degrees() to convert between degrees and radians.
acos(X) -> real
Returns the arccosine of X, in radians. X must be between -1.0 and 1.0; returns NULL for values outside that range.
SELECT acos(0.5); -- 1.0471975511966
asin(X) -> real
Returns the arcsine of X, in radians. X must be between -1.0 and 1.0; returns NULL for values outside that range.
SELECT asin(0.5); -- 0.523598775598299
atan(X) -> real
Returns the arctangent of X, in radians.
SELECT atan(1.0); -- 0.785398163397448
atan2(Y, X) -> real
Returns the arctangent of Y/X, in radians, using the signs of both arguments to determine the quadrant of the result. Unlike atan(Y/X), atan2 correctly handles cases where X is zero.
SELECT atan2(1.0, 1.0); -- 0.785398163397448
cos(X) -> real
Returns the cosine of X, where X is in radians.
SELECT cos(0.0); -- 1.0
sin(X) -> real
Returns the sine of X, where X is in radians.
SELECT sin(pi() / 6); -- 0.5
tan(X) -> real
Returns the tangent of X, where X is in radians.
SELECT tan(pi() / 4); -- 1.0
acosh(X) -> real
Returns the inverse hyperbolic cosine of X. X must be greater than or equal to 1.0; returns NULL for values less than 1.0.
SELECT acosh(2.0); -- 1.31695789692482
asinh(X) -> real
Returns the inverse hyperbolic sine of X.
SELECT asinh(1.0); -- 0.881373587019543
atanh(X) -> real
Returns the inverse hyperbolic tangent of X. X must be between -1.0 and 1.0 (exclusive); returns NULL for values outside that range.
SELECT atanh(0.5); -- 0.549306144334055
cosh(X) -> real
Returns the hyperbolic cosine of X.
SELECT cosh(1.0); -- 1.54308063481524
sinh(X) -> real
Returns the hyperbolic sine of X.
SELECT sinh(1.0); -- 1.1752011936438
tanh(X) -> real
Returns the hyperbolic tangent of X.
SELECT tanh(1.0); -- 0.761594155955765
degrees(X) -> real
Converts X from radians to degrees.
SELECT degrees(pi()); -- 180.0
radians(X) -> real
Converts X from degrees to radians.
SELECT radians(180.0); -- 3.14159265358979
exp(X) -> real
Returns the value of e (Euler’s number, approximately 2.71828) raised to the power X.
SELECT exp(1.0); -- 2.71828182845905
ln(X) -> real
Returns the natural logarithm (base e) of X. Returns NULL if X is less than or equal to zero.
SELECT ln(exp(1.0)); -- 1.0
log(X) -> real
With a single argument, returns the base-10 logarithm of X. Returns NULL if X is less than or equal to zero.
SELECT log(100.0); -- 2.0
log(B, X) -> real
With two arguments, returns the base-B logarithm of X. Returns NULL if either argument is less than or equal to zero.
SELECT log(2, 8); -- 3.0
Note: the single-argument form of log() computes base-10, not the natural logarithm. Use ln() for the natural logarithm.
log10(X) -> real
Returns the base-10 logarithm of X. Returns NULL if X is less than or equal to zero. Equivalent to log(X) with a single argument.
SELECT log10(1000.0); -- 3.0
log2(X) -> real
Returns the base-2 logarithm of X. Returns NULL if X is less than or equal to zero.
SELECT log2(64); -- 6.0
pow(X, Y) -> real
power(X, Y) -> real
Returns X raised to the power Y. pow and power are aliases for the same function.
SELECT pow(2, 10); -- 1024.0
SELECT power(3, 4); -- 81.0
sqrt(X) -> real
Returns the square root of X. Returns NULL if X is negative.
SELECT sqrt(144); -- 12.0
ceil(X) -> integer/real
ceiling(X) -> integer/real
Returns the smallest integer not less than X (rounds toward positive infinity). ceil and ceiling are aliases. When the input is an integer, the result is that same integer. When the input is a float, the result is a float with an integer value.
SELECT ceil(3.2); -- 4.0
SELECT ceiling(-2.8); -- -2.0
SELECT ceil(5); -- 5
floor(X) -> integer/real
Returns the largest integer not greater than X (rounds toward negative infinity). When the input is an integer, the result is that same integer. When the input is a float, the result is a float with an integer value.
SELECT floor(3.8); -- 3.0
SELECT floor(-2.3); -- -3.0
SELECT floor(5); -- 5
trunc(X) -> integer/real
Returns the integer part of X by removing any fractional digits (rounds toward zero). When the input is an integer, the result is that same integer. When the input is a float, the result is a float with an integer value.
SELECT trunc(3.7); -- 3.0
SELECT trunc(-3.7); -- -3.0
SELECT trunc(5); -- 5
The difference between these three rounding functions is visible with negative values:
| Input | ceil | floor | trunc |
|---|---|---|---|
| 3.7 | 4.0 | 3.0 | 3.0 |
| -3.7 | -3.0 | -4.0 | -3.0 |
ceil rounds toward positive infinity, floor rounds toward negative infinity, and trunc rounds toward zero.
mod(X, Y) -> real
Returns the remainder after dividing X by Y. Unlike the % operator, mod() works correctly with floating-point arguments. Returns NULL if Y is zero.
SELECT mod(10, 3); -- 1.0
SELECT mod(10.5, 3.0); -- 1.5
All math functions follow these rules:
A NULL argument produces a NULL result.
A blob argument, or a string that cannot be converted to a number, produces NULL.
A domain error (an argument outside the function's mathematically valid range) produces NULL rather than raising an error; examples include sqrt(-1), acos(2.0), ln(-1), and log(-5).
SELECT sqrt(-1); -- NULL
SELECT acos(2.0); -- NULL
SELECT acos(NULL); -- NULL
SELECT ln(-1); -- NULL
Turso includes a comprehensive set of JSON functions for creating, querying, modifying, and aggregating JSON data. These functions accept both JSON text and JSONB (binary JSON) as input.
Many JSON functions take a path argument that identifies a specific element within a JSON structure. A well-formed path begins with $ (the root element) followed by zero or more accessors:
| Accessor | Meaning |
|---|---|
$.key | Object member by name |
$[N] | Array element at index N (0-based) |
$[#-N] | Array element N from the end |
$[#] | Position after last array element (for appending) |
Examples: $ (root), $.name (object field), $[0] (first array element), $.items[2].price (nested access), $[#-1] (last array element).
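The accessors above can be exercised with json_extract. A sketch against stock SQLite via Python's bundled sqlite3 (JSON functions are part of the SQLite core from 3.38 on, hence the guard); the document and field names here are illustrative:

```python
import sqlite3

con = sqlite3.connect(":memory:")

if sqlite3.sqlite_version_info >= (3, 38, 0):
    doc = '{"name":"Alice","items":[{"price":5},{"price":9},{"price":2}]}'
    row = con.execute(
        "SELECT json_extract(?, '$.name'),"            # object member
        "       json_extract(?, '$.items[1].price'),"  # nested array index
        "       json_extract(?, '$.items[#-1].price')",  # count from the end
        (doc, doc, doc),
    ).fetchone()
    print(row)  # ('Alice', 9, 2)
```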
When a function parameter is labeled “value,” plain text arguments become quoted JSON strings in the result. To embed actual JSON (not a string), wrap the value with json() or another JSON function:
SELECT json_object('data', '[1,2,3]'); -- {"data":"[1,2,3]"}
SELECT json_object('data', json('[1,2,3]')); -- {"data":[1,2,3]}
json(json) -> text
Validates and minifies a JSON string. Accepts JSON text, JSONB blobs, and JSON5 input. Returns canonical RFC-8259 JSON with whitespace removed. Raises an error if the input is malformed.
SELECT json(' { "name" : "Alice", "scores": [90, 85] } ');
-- {"name":"Alice","scores":[90,85]}
JSON5 extensions such as unquoted keys, trailing commas, and comments are accepted on input but normalized to standard JSON on output:
SELECT json('{name: "Alice", age: 25}');
-- {"name":"Alice","age":25}
jsonb(json) -> blob
Returns the JSONB (binary) representation of the input. JSONB is an opaque binary format that can be faster to process than text JSON. All JSON functions accept JSONB as input.
SELECT json(jsonb('{"name":"John","age":30}'));
-- {"name":"John","age":30}
json_array(value1, value2, …) -> text
Creates a JSON array from the given arguments. Text arguments become quoted JSON strings unless they come from another JSON function.
SELECT json_array(1, 2, '3', 4); -- [1,2,"3",4]
SELECT json_array(json_array(1, 2), 'text'); -- [[1,2],"text"]
SELECT json_array(1, null, json('[4,5]')); -- [1,null,[4,5]]
jsonb_array(...) returns the same result in JSONB format.
json_object(label1, value1, …) -> text
Creates a JSON object from label/value pairs. Labels must be strings. Text values become quoted JSON strings unless produced by another JSON function.
SELECT json_object('name', 'Alice', 'age', 25);
-- {"name":"Alice","age":25}
SELECT json_object('user', json_object('id', 1, 'role', 'admin'));
-- {"user":{"id":1,"role":"admin"}}
SELECT json_object();
-- {}
jsonb_object(...) returns the same result in JSONB format.
json_quote(value) -> text
Converts an SQL value into its JSON representation. Strings are quoted and interior quotes are escaped. NULL becomes the JSON literal null. If the value is already JSON (from another JSON function), it is returned unchanged.
SELECT json_quote('hello'); -- "hello"
SELECT json_quote(3.14159); -- 3.14159
SELECT json_quote(12345); -- 12345
SELECT json_quote(null); -- null
json_extract(json, path, …) -> value
Extracts one or more values from a JSON document at the given path(s).
With a single path, the return type depends on the JSON element: SQL NULL for JSON null, INTEGER for integers and booleans, REAL for floating-point numbers, TEXT for strings, and a text JSON representation for objects and arrays.
With multiple paths, returns a JSON array containing all extracted values.
SELECT json_extract('{"a":2,"c":[4,5,{"f":7}]}', '$.c[2].f');
-- 7
SELECT json_extract('{"a":[1,2,3]}', '$.a');
-- [1,2,3]
SELECT json_extract('{"a":[1,2,3]}', '$.a', '$.a[0]', '$.a[1]', '$.a[3]');
-- [[1,2,3],1,2,null]
Returns NULL if the input JSON is NULL. Raises an error if the JSON is malformed.
jsonb_extract(...) returns JSONB for objects and arrays instead of text JSON.
json -> path -> text
Extracts a value and always returns it as a JSON text representation. For strings, the result includes the surrounding quotes.
SELECT '{"a":2,"c":[4,5]}' -> '$.a'; -- 2
SELECT '{"a":"xyz"}' -> '$.a'; -- "xyz"
SELECT '[1,2,3]' -> 1; -- 2
The right operand can be a full path ('$.field'), a plain text label (interpreted as '$.label'), or an integer array index. Negative integers count from the end.
SELECT '{"a":1}' -> 'a'; -- 1
SELECT '[1,2,3]' -> -1; -- 3
The -> operator can be chained:
SELECT '{"a":2,"c":[4,5,{"f":7}]}' -> 'c' -> 2 ->> 'f';
-- 7
json ->> path -> value
Extracts a value and returns it as an SQL type: TEXT for strings (without quotes), INTEGER for integers and booleans, REAL for floats, NULL for JSON null.
SELECT '{"a":"xyz"}' ->> '$.a'; -- xyz (text, no quotes)
SELECT '{"a":2}' ->> '$.a'; -- 2 (integer)
SELECT 'true' ->> '$'; -- 1 (integer)
SELECT 'false' ->> '$'; -- 0 (integer)
json_array_length(json [, path]) -> integer
Returns the number of elements in the JSON array. If the value at the specified path is not an array, returns 0. If the path does not exist, returns NULL.
SELECT json_array_length('[1,2,3,4]'); -- 4
SELECT json_array_length('[]'); -- 0
SELECT json_array_length('{"one":[1,2,3]}', '$.one'); -- 3
SELECT json_array_length('{"one":[1,2,3]}'); -- 0
SELECT json_array_length('{"one":[1,2,3]}', '$.two'); -- (NULL)
json_type(json [, path]) -> text
Returns a string indicating the type of the JSON element. Possible return values: 'object', 'array', 'integer', 'real', 'text', 'true', 'false', 'null'. Returns NULL if the path does not exist.
SELECT json_type('{"a":[2,3.5,true,false,null,"x"]}'); -- object
SELECT json_type('{"a":[2,3.5,true,false,null,"x"]}', '$.a'); -- array
SELECT json_type('{"a":[2,3.5,true,false,null,"x"]}', '$.a[0]'); -- integer
SELECT json_type('{"a":[2,3.5,true,false,null,"x"]}', '$.a[1]'); -- real
SELECT json_type('{"a":[2,3.5,true,false,null,"x"]}', '$.a[2]'); -- true
SELECT json_type('{"a":[2,3.5,true,false,null,"x"]}', '$.a[3]'); -- false
SELECT json_type('{"a":[2,3.5,true,false,null,"x"]}', '$.a[4]'); -- null
SELECT json_type('{"a":[2,3.5,true,false,null,"x"]}', '$.a[5]'); -- text
json_valid(json [, flags]) -> integer
Returns 1 if the input is well-formed JSON, 0 if it is malformed, or NULL if the input is NULL.
Without flags (or with flags = 1), validates against RFC-8259 canonical JSON. The optional flags parameter is a bitmask that controls which formats are considered valid:
| Flag | Meaning |
|---|---|
| 1 | RFC-8259 canonical JSON text |
| 2 | JSON5 text |
| 4 | JSONB (fast superficial check) |
| 8 | JSONB (strict thorough check) |
Flags can be combined with bitwise OR. Common combinations:
| Value | Accepts |
|---|---|
| 1 | Standard JSON text only (default) |
| 2 | JSON5 text only |
| 5 | Standard JSON text or JSONB |
| 6 | JSON5 text or JSONB |
SELECT json_valid('{"a":55,"b":72}'); -- 1
SELECT json_valid('not a valid json'); -- 0
SELECT json_valid(NULL); -- (NULL)
SELECT json_valid(123); -- 1
json_error_position(json) -> integer
Returns 0 if the input is well-formed JSON or JSON5. If malformed, returns the 1-based character position of the first syntax error. Returns NULL if the input is NULL.
SELECT json_error_position('{"a":55,"b":72}'); -- 0
SELECT json_error_position('{"a":55,"b":72,,}'); -- 16
SELECT json_error_position(NULL); -- (NULL)
json_pretty(json [, indent]) -> text
Returns the input JSON formatted with whitespace for readability. The optional indent argument specifies the indentation string (default: four spaces). If indent is NULL, the default four spaces are used.
SELECT json_pretty('{"name":"Alice","age":25}');
-- {
-- "name": "Alice",
-- "age": 25
-- }
json_set(json, path, value, …) -> text
Creates or replaces values at the given paths. If the path exists, the value is replaced. If it does not exist, it is created (including intermediate objects and arrays as needed). Accepts multiple path/value pairs.
SELECT json_set('{"a":2,"c":4}', '$.a', 99);
-- {"a":99,"c":4}
SELECT json_set('{"a":2,"c":4}', '$.e', 99);
-- {"a":2,"c":4,"e":99}
SELECT json_set('{}', '$.field', 'value');
-- {"field":"value"}
SELECT json_set('[123]', '$[0]', 456, '$[1]', 789);
-- [456,789]
To set a JSON value (not a string), wrap it with json():
SELECT json_set('{"a":2,"c":4}', '$.c', json('[97,96]'));
-- {"a":2,"c":[97,96]}
json_set can create deeply nested structures:
SELECT json_set('{}', '$.object.doesnt.exist', 'value');
-- {"object":{"doesnt":{"exist":"value"}}}
jsonb_set(...) returns the same result in JSONB format.
json_insert(json, path, value, …) -> text
Inserts values only where the path does not already exist. Existing values are not modified.
SELECT json_insert('{"a":1}', '$.a', 2); -- {"a":1} (no change)
SELECT json_insert('{"a":1}', '$.b', 2); -- {"a":1,"b":2}
SELECT json_insert('[1,2,3]', '$[3]', 4); -- [1,2,3,4]
SELECT json_insert('{"a":1}', '$.b', 2, '$.c', 3);
-- {"a":1,"b":2,"c":3}
jsonb_insert(...) returns the same result in JSONB format.
json_replace(json, path, value, …) -> text
Replaces values only where the path already exists. Does not create new paths.
SELECT json_replace('{"a":1,"b":2}', '$.a', 42); -- {"a":42,"b":2}
SELECT json_replace('{"a":1,"b":2}', '$.c', 3); -- {"a":1,"b":2} (no change)
SELECT json_replace('[1,2,3,4]', '$[1]', 99); -- [1,99,3,4]
jsonb_replace(...) returns the same result in JSONB format.
json_remove(json, path, …) -> text
Returns a copy of the JSON with elements at the specified paths removed. Paths that do not exist are silently ignored. Removing the root element ($) returns NULL.
SELECT json_remove('{"a":1,"b":2,"c":3}', '$.b'); -- {"a":1,"c":3}
SELECT json_remove('[1,2,3,4]', '$[1]'); -- [1,3,4]
SELECT json_remove('{"a":1,"b":2}', '$.a', '$.b'); -- {}
SELECT json_remove('{"a":1}', '$'); -- (NULL)
When removing multiple array elements, removals are applied sequentially, and each removal shifts subsequent indices:
SELECT json_remove('[0,1,2,3,4]', '$[2]', '$[0]');
-- [1,3,4]
jsonb_remove(...) returns the same result in JSONB format.
json_patch(json1, json2) -> text
Applies the RFC-7396 MergePatch algorithm. Object members in json2 are merged into json1: new keys are added, existing keys are replaced, and keys with a null value are deleted. Arrays are treated as atomic values and replaced entirely.
SELECT json_patch('{"a":1,"b":2}', '{"c":3,"d":4}');
-- {"a":1,"b":2,"c":3,"d":4}
SELECT json_patch('{"a":1,"b":2}', '{"b":3}');
-- {"a":1,"b":3}
SELECT json_patch('{"a":1,"b":2}', '{"a":null}');
-- {"b":2}
SELECT json_patch('{"user":{"name":"john"}}', '{"user":{"age":30}}');
-- {"user":{"name":"john","age":30}}
SELECT json_patch('{"arr":[1,2,3]}', '{"arr":[4,5,6]}');
-- {"arr":[4,5,6]}
jsonb_patch(...) returns the same result in JSONB format.
json_group_array(value) -> text
An aggregate function that collects all values from the group into a JSON array.
CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, price REAL);
INSERT INTO products VALUES (1, 'hat', 9.99), (2, 'shirt', 24.99), (3, 'coat', 79.99);
SELECT json_group_array(name) FROM products;
-- ["hat","shirt","coat"]
Can be combined with other JSON functions to build complex aggregated structures:
SELECT json_group_array(json_object('name', name, 'price', price)) FROM products;
-- [{"name":"hat","price":9.99},{"name":"shirt","price":24.99},{"name":"coat","price":79.99}]
jsonb_group_array(value) returns the same result in JSONB format.
json_group_object(label, value) -> text
An aggregate function that collects label/value pairs from the group into a JSON object.
SELECT json_group_object(name, price) FROM products;
-- {"hat":9.99,"shirt":24.99,"coat":79.99}
jsonb_group_object(label, value) returns the same result in JSONB format.
json_each(json [, path])
A table-valued function that returns one row for each immediate child of a JSON array or object. It does not recurse into nested structures. With an optional path argument, iteration starts at the element identified by that path.
The returned columns are:
| Column | Type | Description |
|---|---|---|
| key | any | Array index (integer) or object key (text) |
| value | any | SQL value for primitives; JSON text for objects/arrays |
| type | text | One of: object, array, integer, real, text, true, false, null |
| atom | any | SQL value for primitives; NULL for objects/arrays |
| id | integer | Unique identifier for this element |
| parent | integer | Always NULL (only populated by json_tree) |
| fullkey | text | Full JSON path to this element |
| path | text | JSON path to the containing element |
SELECT key, value, type FROM json_each('[1, 2.5, "x", true, false, null]');
-- 0|1|integer
-- 1|2.5|real
-- 2|x|text
-- 3|1|true
-- 4|0|false
-- 5||null
SELECT key, value, type FROM json_each('{"name":"Alice","age":30}');
-- name|Alice|text
-- age|30|integer
With a path argument:
SELECT key, value FROM json_each('{"a":[10,20,30]}', '$.a');
-- 0|10
-- 1|20
-- 2|30
A common use case is joining json_each against a table column to search within JSON arrays:
CREATE TABLE contacts (id INTEGER PRIMARY KEY, name TEXT, phones TEXT);
INSERT INTO contacts VALUES (1, 'Alice', '["555-0100","555-0101"]');
INSERT INTO contacts VALUES (2, 'Bob', '["555-0200"]');
SELECT DISTINCT contacts.name
FROM contacts, json_each(contacts.phones)
WHERE json_each.value LIKE '555-01%';
-- Alice
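The same join pattern can be driven from application code. A minimal sketch using Python's sqlite3 module, which accepts the identical SQL:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE contacts (id INTEGER PRIMARY KEY, name TEXT, phones TEXT);
    INSERT INTO contacts VALUES (1, 'Alice', '["555-0100","555-0101"]');
    INSERT INTO contacts VALUES (2, 'Bob', '["555-0200"]');
""")

# Expand each JSON phone array into rows, then filter on the elements
rows = con.execute("""
    SELECT DISTINCT contacts.name
    FROM contacts, json_each(contacts.phones)
    WHERE json_each.value LIKE '555-01%'
""").fetchall()
print(rows)  # [('Alice',)]
con.close()
```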
json_tree(json [, path])
A table-valued function that recursively walks the entire JSON structure, returning one row for every element at every level of nesting. It returns the same columns as json_each, but the parent column is populated with the id of each element’s parent.
SELECT key, type, fullkey FROM json_tree('{"a":1,"b":{"c":2},"d":[3,4]}') ORDER BY id;
-- |object|$
-- a|integer|$.a
-- b|object|$.b
-- c|integer|$.b.c
-- d|array|$.d
-- 0|integer|$.d[0]
-- 1|integer|$.d[1]
The json_tree function is partially supported. Basic recursive traversal works, but some edge cases involving negative array indices in the path argument have known limitations. See the SQLite documentation for full behavioral details.
All other JSON functions listed on this page are fully compatible with SQLite.
PRAGMAs are special SQL statements that configure the database engine or query internal state. Unlike regular SQL statements, PRAGMAs are specific to the database implementation and do not follow the SQL standard.
PRAGMA pragma_name;
PRAGMA pragma_name = value;
PRAGMA pragma_name(value);
The first form queries the current value. The second and third forms set a new value. Not all PRAGMAs support both forms.
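Both forms can be exercised from any SQLite-compatible client. A minimal sketch with Python's sqlite3 module, using the user_version PRAGMA described later on this page:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Query form: PRAGMA name; returns the current value as a one-row result
before = con.execute("PRAGMA user_version").fetchone()[0]
print(before)  # 0

# Set form: PRAGMA name = value;
con.execute("PRAGMA user_version = 7")

# Reading it back confirms the new value
after = con.execute("PRAGMA user_version").fetchone()[0]
print(after)  # 7
con.close()
```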
PRAGMA application_id;
PRAGMA application_id = integer;
Queries or sets the 32-bit signed integer “Application ID” stored at offset 68 in the database header. Applications that use Turso as a file format can set a unique application ID so that external tools can identify the file type. The default value is 0.
PRAGMA application_id;
-- 0
PRAGMA application_id = 12345;
PRAGMA application_id;
-- 12345
PRAGMA database_list;
Returns one row per attached database, with columns seq (sequence number), name (database name), and file (file path). The main database always has sequence number 0 and the name main.
PRAGMA database_list;
-- seq|name|file
-- 0|main|
PRAGMA encoding;
Returns the text encoding used by the database. Turso uses UTF-8.
PRAGMA encoding;
-- UTF-8
PRAGMA freelist_count;
Returns the number of unused pages in the database file. A high freelist count may indicate the database would benefit from VACUUM.
PRAGMA freelist_count;
-- 0
PRAGMA page_count;
Returns the total number of pages in the database file. Multiply by the page size to get the approximate database size in bytes.
PRAGMA page_count;
-- 0
PRAGMA page_size;
PRAGMA page_size = bytes;
Queries or sets the database page size in bytes. The value must be a power of two between 512 and 65536. The default is 4096.
Setting a new page size takes effect when the database is first created or after a VACUUM operation.
PRAGMA page_size;
-- 4096
PRAGMA schema_version;
Returns the schema version number, an integer that Turso increments automatically whenever the database schema changes (via CREATE TABLE, DROP TABLE, etc.). The default for a new database is 0.
Writing to schema_version is accepted syntactically but treated as a no-op for safety. Modifying the schema version externally can cause prepared statements to use stale schemas and corrupt data.
PRAGMA schema_version;
-- 0
PRAGMA user_version;
PRAGMA user_version = integer;
Queries or sets a user-defined 32-bit integer stored in the database header at offset 60. Turso does not use this value internally – it is available for applications to track their own schema migration version or any other purpose. The default is 0.
PRAGMA user_version;
-- 0
PRAGMA user_version = 100;
PRAGMA user_version;
-- 100
PRAGMA pragma_list;
Returns a list of all PRAGMA names recognized by the current database connection.
PRAGMA pragma_list;
-- application_id
-- busy_timeout
-- cache_size
-- ...
PRAGMA table_info(table-name);
Returns one row per regular column in the named table. Each row contains:
| Column | Description |
|---|---|
| cid | Column index (zero-based) |
| name | Column name |
| type | Declared type (empty string if none) |
| notnull | 1 if the column has a NOT NULL constraint, 0 otherwise |
| dflt_value | Default value expression, or empty if none |
| pk | 0 if the column is not part of the primary key; otherwise the 1-based index within the primary key |
Generated columns and hidden columns are omitted. Use table_xinfo for a complete listing.
CREATE TABLE products (
id INTEGER PRIMARY KEY,
name TEXT NOT NULL,
price REAL DEFAULT 0.0,
in_stock INTEGER DEFAULT 1
);
PRAGMA table_info(products);
-- cid|name|type|notnull|dflt_value|pk
-- 0|id|INTEGER|0||1
-- 1|name|TEXT|1||0
-- 2|price|REAL|0|0.0|0
-- 3|in_stock|INTEGER|0|1|0
PRAGMA table_xinfo(table-name);
Returns the same columns as table_info plus an additional hidden column. This PRAGMA includes all columns – regular, generated, and hidden.
| hidden Value | Meaning |
|---|---|
| 0 | Normal column |
| 1 | Hidden column in a virtual table |
| 2 | Virtual (dynamic) generated column |
| 3 | Stored generated column |
CREATE TABLE products (
id INTEGER PRIMARY KEY,
name TEXT NOT NULL,
price REAL DEFAULT 0.0,
in_stock INTEGER DEFAULT 1
);
PRAGMA table_xinfo(products);
-- cid|name|type|notnull|dflt_value|pk|hidden
-- 0|id|INTEGER|0||1|0
-- 1|name|TEXT|1||0|0
-- 2|price|REAL|0|0.0|0|0
-- 3|in_stock|INTEGER|0|1|0|0
PRAGMA busy_timeout;
PRAGMA busy_timeout = milliseconds;
Queries or sets the busy timeout in milliseconds. When another connection holds a lock, Turso will retry for up to this many milliseconds before returning an error. The default is 0 (return immediately on lock contention).
PRAGMA busy_timeout;
-- 0
PRAGMA busy_timeout = 5000;
PRAGMA busy_timeout;
-- 5000
PRAGMA cache_size;
PRAGMA cache_size = pages;
Queries or sets the suggested maximum number of database pages held in memory. A positive value specifies pages directly. A negative value specifies the cache size in kibibytes (KiB). The default is -2000 (approximately 2 MB).
This setting applies only to the current session and reverts to the default when the connection closes.
PRAGMA cache_size;
-- -2000
-- Set cache to 4000 pages
PRAGMA cache_size = 4000;
PRAGMA cache_size;
-- 4000
-- Set cache to approximately 4 MB
PRAGMA cache_size = -4000;
PRAGMA cache_size;
-- -4000
PRAGMA cache_spill;
PRAGMA cache_spill = boolean;
Queries or sets whether the pager is allowed to spill dirty pages to the database file in the middle of a transaction. When enabled (the default, 1), the pager may write dirty pages to disk before a transaction commits. When disabled (0), dirty pages are held in memory until commit.
Disabling cache spill avoids acquiring an exclusive lock on the database file until the transaction commits, which can be useful for long-running transactions.
PRAGMA cache_spill;
-- 1
PRAGMA cache_spill = 0;
PRAGMA cache_spill;
-- 0
PRAGMA max_page_count;
PRAGMA max_page_count = N;
Queries or sets the maximum number of pages allowed in the database file. Both the query and set forms return the current maximum. The maximum cannot be reduced below the current database size. The default is 4294967294.
PRAGMA max_page_count;
-- 4294967294
PRAGMA max_page_count = 1000;
PRAGMA max_page_count;
-- 1000
PRAGMA query_only;
PRAGMA query_only = boolean;
When set to 1, all write operations (CREATE, INSERT, UPDATE, DELETE, DROP) are rejected with an error. The default is 0 (writes allowed).
PRAGMA query_only = 1;
CREATE TABLE test (id INTEGER);
-- Error: Cannot execute write statement in query_only mode
PRAGMA query_only = 0;
PRAGMA temp_store;
PRAGMA temp_store = {0 | 1 | 2};
Queries or sets the storage location for temporary tables and indices.
| Value | Name | Behavior |
|---|---|---|
| 0 | DEFAULT | Use the compile-time default |
| 1 | FILE | Store temporary data in a file |
| 2 | MEMORY | Store temporary data in memory |
Changing this setting deletes all existing temporary tables, indices, triggers, and views. The default is 0.
PRAGMA temp_store;
-- 0
PRAGMA temp_store = 2;
PRAGMA temp_store;
-- 2
PRAGMA journal_mode;
PRAGMA journal_mode = mode;
Queries or sets the journal mode for the database. The journal mode controls how transactions are logged for crash recovery.
| Mode | Description |
|---|---|
| wal | Write-ahead logging. Allows concurrent readers and a single writer. This is the default in Turso. |
Turso defaults to WAL mode. Setting the journal mode returns the active mode.
PRAGMA journal_mode;
-- wal
PRAGMA journal_mode = 'wal';
-- wal
PRAGMA synchronous;
PRAGMA synchronous = {0 | 2};
Queries or sets the synchronous flag, which controls how aggressively Turso forces writes to disk. The value is returned as an integer.
| Value | Name | Behavior |
|---|---|---|
| 0 | OFF | No synchronous writes. Fastest, but the database may corrupt if the operating system crashes or power is lost. |
| 2 | FULL | Synchronous writes at every critical moment. Safe against both application crashes and OS/power failures. This is the default. |
PRAGMA synchronous;
-- 2
PRAGMA synchronous = OFF;
PRAGMA synchronous;
-- 0
PRAGMA synchronous = FULL;
PRAGMA synchronous;
-- 2
PRAGMA wal_checkpoint;
Runs a checkpoint operation on the write-ahead log (WAL). A checkpoint transfers data from the WAL file back into the main database file. Returns three columns:
| Column | Description |
|---|---|
| busy | 0 if the checkpoint completed, 1 if it was blocked by a concurrent reader or writer |
| log | Total number of frames in the WAL |
| checkpointed | Number of frames successfully checkpointed |
PRAGMA wal_checkpoint;
-- busy|log|checkpointed
-- 0|0|0
PRAGMA foreign_keys;
PRAGMA foreign_keys = boolean;
Queries or sets whether foreign key constraints are enforced. When enabled (1), INSERT, UPDATE, and DELETE operations that violate a foreign key constraint will fail with an error. The default is 0 (foreign keys not enforced).
This setting cannot be changed while a transaction is active.
PRAGMA foreign_keys;
-- 0
PRAGMA foreign_keys = 1;
CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL);
CREATE TABLE items (
id INTEGER PRIMARY KEY,
order_id INTEGER REFERENCES orders(id)
);
-- This fails because order 999 does not exist
INSERT INTO items VALUES (1, 999);
-- Error: foreign key constraint failed
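From application code, a foreign key violation surfaces as a constraint error. A minimal sketch with Python's sqlite3 module, where the failed insert raises IntegrityError:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = 1")  # per-connection; off by default
con.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL);
    CREATE TABLE items (
        id INTEGER PRIMARY KEY,
        order_id INTEGER REFERENCES orders(id)
    );
""")

try:
    con.execute("INSERT INTO items VALUES (1, 999)")  # order 999 does not exist
    failed = False
except sqlite3.IntegrityError:
    failed = True

print(failed)  # True
con.close()
```

Note that the PRAGMA must be issued on every new connection that needs enforcement; it is not a persistent database property.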
PRAGMA ignore_check_constraints;
PRAGMA ignore_check_constraints = boolean;
Queries or sets whether CHECK constraints are enforced. When set to 1, CHECK constraint violations are silently ignored during INSERT and UPDATE operations. The default is 0 (CHECK constraints enforced).
CREATE TABLE orders (
id INTEGER PRIMARY KEY,
customer TEXT NOT NULL,
total REAL CHECK(total >= 0)
);
-- With CHECK constraints enforced (default)
INSERT INTO orders VALUES (1, 'Alice', -5.0);
-- Error: CHECK constraint failed: total >= 0
-- Disable CHECK constraints
PRAGMA ignore_check_constraints = 1;
INSERT INTO orders VALUES (1, 'Alice', -5.0);
-- Succeeds despite negative total
PRAGMA integrity_check;
Performs a thorough consistency check of the entire database. Examines every page, verifies B-tree structure, checks for missing or duplicate index entries, validates UNIQUE and NOT NULL constraints, and confirms freelist integrity. Returns ok if no problems are found, or one or more rows describing any issues detected.
PRAGMA integrity_check;
-- ok
PRAGMA quick_check;
Performs a faster but less comprehensive consistency check than integrity_check. It verifies B-tree structure and record formatting but does not validate UNIQUE constraints or cross-check index content against table content. Returns ok if no problems are found.
PRAGMA quick_check;
-- ok
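Both checks can be run programmatically as part of a periodic health check. A minimal sketch with Python's sqlite3 module:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")
con.execute("INSERT INTO t VALUES (1, 'x')")

# Each check returns 'ok' as a single row when no problems are found
ok_full = con.execute("PRAGMA integrity_check").fetchone()[0]
ok_quick = con.execute("PRAGMA quick_check").fetchone()[0]
print(ok_full, ok_quick)  # ok ok
con.close()
```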
PRAGMA legacy_file_format;
This PRAGMA is a no-op. It exists for compatibility but does not return a value or perform any action.
Turso supports the PRAGMAs listed on this page. The following differences from SQLite are worth noting:
- journal_mode supports only wal mode. Other journal modes (delete, truncate, persist, memory, off) are not supported.
- synchronous supports only OFF (0) and FULL (2). NORMAL (1) and EXTRA (3) are not available.
- The spill-threshold form of cache_spill (PRAGMA schema.cache_spill = N) is not available.
- Checkpoint mode arguments to wal_checkpoint (PASSIVE, FULL, RESTART, TRUNCATE) are not available.
Turso Extension. This feature is not available in SQLite. Custom types require STRICT tables for encode/decode behavior.
Turso supports user-defined custom types that extend the type system of STRICT tables. A custom type defines how values are transformed when written to and read from storage using ENCODE and DECODE expressions, allowing you to enforce domain constraints, normalize data, or change the storage representation without modifying application queries.
Custom types are built on top of one of the four base storage types (TEXT, INTEGER, REAL, BLOB). When a value is inserted into a column with a custom type, the ENCODE expression transforms it before writing. When the value is read back, the DECODE expression reverses the transformation. This is transparent to queries: SELECT statements return decoded values, and WHERE clauses operate on decoded values.
Type definitions are stored in the sqlite_turso_types system table and are loaded into an in-memory registry when the database is opened. Types persist across sessions.
CREATE TYPE [IF NOT EXISTS] type_name [(param [, ...])]
BASE {TEXT | INTEGER | REAL | BLOB}
[ENCODE expr]
[DECODE expr]
[DEFAULT expr]
[OPERATOR 'op' (right_type) -> func_name]*;
DROP TYPE [IF EXISTS] type_name;
CREATE TYPE registers a new custom type in the database. The type definition specifies:
- BASE: the underlying storage type, one of TEXT, INTEGER, REAL, or BLOB.
- ENCODE: an expression applied on write, in which value is a placeholder that refers to the value being inserted. If omitted, values are stored as-is.
- DECODE: an expression applied on read, in which value is a placeholder that refers to the raw stored value. If omitted, values are returned as-is.
DROP TYPE removes a custom type definition from the database. If the type does not exist and IF EXISTS is not specified, an error is raised.
The BASE clause is required and specifies the underlying storage type.
| Base Type | Description |
|---|---|
| TEXT | Stored as a UTF-8 text string |
| INTEGER | Stored as a signed integer (up to 8 bytes) |
| REAL | Stored as an 8-byte IEEE 754 floating-point number |
| BLOB | Stored as raw binary data |
The base type determines how the encoded value is stored on disk and what type affinity rules apply during storage.
The ENCODE clause defines an expression that transforms values on write. The special identifier value represents the input being inserted. The expression can be any valid SQL expression, including function calls, arithmetic, CASE expressions, and nested function calls.
If the ENCODE expression raises an error (for example, json(value) on invalid JSON), the INSERT statement fails. This makes ENCODE a natural place to add validation logic.
The DECODE clause defines an expression that transforms values on read. The special identifier value represents the raw stored value. The DECODE expression is applied whenever a column with the custom type is selected.
The DEFAULT clause specifies a fallback value for columns of this type when no value is provided during INSERT. If a column also has its own DEFAULT clause in the CREATE TABLE statement, the column-level default takes priority.
The OPERATOR clause maps an operator symbol to a named function. The syntax is:
OPERATOR 'op' (right_type) -> func_name
Where op is the operator symbol (such as +, -, <, =), right_type is the type of the right-hand operand, and func_name is the function to call when this operator is used between values of the custom type.
CREATE TYPE IF NOT EXISTS silently succeeds if a type with the same name already exists. DROP TYPE IF EXISTS silently succeeds if the type does not exist.
Custom types can accept parameters that are substituted into the ENCODE and DECODE expressions. Parameters are declared in parentheses after the type name and referenced by name in the expressions.
CREATE TYPE type_name(param1, param2)
BASE base_type
ENCODE expr_using_param1_and_param2
DECODE expr_using_param1_and_param2;
When the type is used in a CREATE TABLE statement, the actual parameter values are provided:
CREATE TABLE t (col type_name(100, 2)) STRICT;
The parameter values are substituted into the ENCODE and DECODE expressions at compile time.
Custom types are used as column type names in CREATE TABLE ... STRICT statements. The type name replaces a standard type like TEXT or INTEGER:
CREATE TYPE cents BASE integer ENCODE value * 100 DECODE value / 100;
CREATE TABLE products (
id INTEGER PRIMARY KEY,
name TEXT,
price cents
) STRICT;
Custom types require STRICT tables. On non-STRICT tables, the ENCODE transformation is not applied during insertion, which leads to incorrect values when DECODE runs on read.
The CAST expression can target a custom type. When you write CAST(expr AS type_name), the ENCODE and DECODE expressions are both applied (a round-trip), producing the normalized user-facing form of the value:
CREATE TYPE normalized BASE text ENCODE lower(value) DECODE value;
SELECT CAST('Hello World' AS normalized);
-- Returns 'hello world'
For parametric types, the parameters must be provided:
SELECT CAST(42 AS numeric(10,2));
-- Returns '42.00'
This is useful for normalizing values or validating that a value conforms to a custom type's constraints outside of an INSERT context.
Custom type definitions are stored in the sqlite_turso_types system table. You can query them directly:
SELECT name, sql FROM sqlite_turso_types;
NULL values pass through ENCODE and DECODE unchanged. If you insert NULL into a column with a custom type, NULL is stored and NULL is returned on read, regardless of the ENCODE and DECODE expressions.
-- Define a type that stores monetary values as cents
CREATE TYPE cents BASE integer
ENCODE value * 100
DECODE value / 100;
-- Use it in a STRICT table
CREATE TABLE products (
id INTEGER PRIMARY KEY,
name TEXT,
price cents
) STRICT;
INSERT INTO products VALUES (1, 'Coffee', 4);
INSERT INTO products VALUES (2, 'Tea', 2);
INSERT INTO products VALUES (3, 'Juice', 3);
-- Values are decoded on read
SELECT id, name, price FROM products;
-- 1|Coffee|4
-- 2|Tea|2
-- 3|Juice|3
-- Define a type that validates JSON on insert
CREATE TYPE validated_json BASE text
ENCODE json(value)
DECODE value;
CREATE TABLE config (
id INTEGER PRIMARY KEY,
data validated_json
) STRICT;
-- Valid JSON is accepted and normalized
INSERT INTO config VALUES (1, '{"key": "val"}');
SELECT id, data FROM config;
-- 1|{"key":"val"}
-- Invalid JSON is rejected at insert time
INSERT INTO config VALUES (2, 'not valid json');
-- Error: malformed JSON
-- Define a type that lowercases text on insert
CREATE TYPE normalized BASE text
ENCODE lower(value)
DECODE value;
CREATE TABLE tags (label normalized) STRICT;
INSERT INTO tags VALUES ('JavaScript');
INSERT INTO tags VALUES ('PYTHON');
INSERT INTO tags VALUES ('Rust');
SELECT label FROM tags ORDER BY label;
-- javascript
-- python
-- rust
-- Define a type with a default value
CREATE TYPE cents BASE integer
ENCODE value * 100
DECODE value / 100
DEFAULT 0;
CREATE TABLE products (
id INTEGER PRIMARY KEY,
price cents
) STRICT;
-- Omitting price uses the type-level default of 0
INSERT INTO products(id) VALUES (1);
INSERT INTO products VALUES (2, 5);
SELECT id, price FROM products;
-- 1|0
-- 2|5
CREATE TYPE cents BASE integer
ENCODE value * 100
DECODE value / 100
DEFAULT 0;
-- Column-level DEFAULT takes priority over type-level DEFAULT
CREATE TABLE products (
id INTEGER PRIMARY KEY,
price cents DEFAULT 10
) STRICT;
INSERT INTO products(id) VALUES (1);
SELECT id, price FROM products;
-- 1|10
-- Define a clamping type with configurable bounds
CREATE TYPE clamp(lo, hi) BASE integer
ENCODE CASE
WHEN value < lo THEN lo
WHEN value > hi THEN hi
ELSE value
END
DECODE value;
CREATE TABLE readings (
id INTEGER PRIMARY KEY,
temperature clamp(0, 100)
) STRICT;
INSERT INTO readings VALUES (1, 50);
INSERT INTO readings VALUES (2, 150);
INSERT INTO readings VALUES (3, -20);
SELECT id, temperature FROM readings;
-- 1|50
-- 2|100
-- 3|0
CREATE TYPE cents BASE integer
ENCODE value * 100
DECODE value / 100;
CREATE TYPE normalized BASE text
ENCODE lower(value)
DECODE value;
CREATE TABLE products (
id INTEGER PRIMARY KEY,
name normalized,
price cents
) STRICT;
INSERT INTO products VALUES (1, 'Coffee', 4);
INSERT INTO products VALUES (2, 'TEA', 2);
SELECT id, name, price FROM products;
-- 1|coffee|4
-- 2|tea|2
CREATE TYPE cents BASE integer
ENCODE value * 100
DECODE value / 100;
CREATE TABLE products (
id INTEGER PRIMARY KEY,
price cents
) STRICT;
INSERT INTO products VALUES (1, NULL);
SELECT id, COALESCE(price, 'IS_NULL') FROM products;
-- 1|IS_NULL
CREATE TYPE cents BASE integer
ENCODE value * 100
DECODE value / 100;
-- Does not raise an error if the type already exists
CREATE TYPE IF NOT EXISTS cents BASE integer
ENCODE value * 100
DECODE value / 100;
SELECT count(*) FROM sqlite_turso_types WHERE name = 'cents';
-- 1
CREATE TYPE cents BASE integer
ENCODE value * 100
DECODE value / 100;
DROP TYPE cents;
SELECT count(*) FROM sqlite_turso_types WHERE name = 'cents';
-- 0
-- DROP TYPE IF EXISTS does not raise an error for missing types
DROP TYPE IF EXISTS nonexistent;
CREATE TYPE cents BASE integer
ENCODE value * 100
DECODE value / 100
DEFAULT 0;
SELECT name, sql FROM sqlite_turso_types WHERE name = 'cents';
-- cents|CREATE TYPE cents BASE integer ENCODE value * 100 DECODE value / 100 DEFAULT 0
Custom type definitions are stored in the sqlite_turso_types system table. SQLite-based tools that open a Turso database may not understand this table.
Turso extension. Vector search is a Turso-specific feature and is not available in standard SQLite.
Turso supports vector operations for building similarity search and semantic search applications. Vectors are fixed-length arrays of floating-point numbers that represent data points in a high-dimensional space. They are commonly produced by machine learning embedding models that convert text, images, or other data into numerical representations where similar items have nearby vectors.
Vectors are stored as compact binary BLOBs and can be compared using built-in distance functions to find the most similar items.
A typical workflow is:
1. Create a table with a BLOB column (optionally using the F32_BLOB/F64_BLOB type hint) to hold embeddings.
2. Insert vectors with the vector32() or vector64() creation functions. These functions convert a JSON text array into a compact binary representation.
3. Query with a distance function (vector_distance_cos, vector_distance_l2, etc.) combined with ORDER BY and LIMIT to find the nearest neighbors.
Vector indexes are not yet supported. All vector searches currently use brute-force scanning, which means search time scales linearly with the number of rows. For small to medium datasets (up to hundreds of thousands of rows), brute-force search is often fast enough. For larger datasets, consider partitioning or pre-filtering with WHERE clauses on non-vector columns.
Turso supports three vector formats:
| Format | Function | Bytes per dimension | Description |
|---|---|---|---|
| Float32 dense | vector32() or vector() | 4 | 32-bit floating-point. Default format, good balance of precision and size. |
| Float64 dense | vector64() | 8 | 64-bit floating-point. Higher precision, double the storage. |
| Float32 sparse | vector32_sparse() | 4 (non-zero only) | 32-bit floating-point sparse representation. Only stores non-zero dimensions. |
The vector() function is an alias for vector32().
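Based on the bytes-per-dimension figures in the table above, per-row storage for a dense vector column is easy to estimate. A quick sketch (the helper is ours, not a Turso API):

```python
# Back-of-the-envelope storage estimate for dense vector columns:
# roughly dimensions * bytes_per_dimension per row.
BYTES_PER_DIM = {"float32": 4, "float64": 8}

def vector_blob_bytes(dimensions: int, fmt: str = "float32") -> int:
    """Approximate per-row storage for a dense vector column."""
    return dimensions * BYTES_PER_DIM[fmt]

print(vector_blob_bytes(1536))             # 6144 bytes, about 6 KB per row
print(vector_blob_bytes(384))              # 1536 bytes, about 1.5 KB per row
print(vector_blob_bytes(1536, "float64"))  # 12288 bytes, double the storage
```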
Vectors are stored on disk as BLOBs. You can use a plain BLOB column or the optional type hints F32_BLOB(n) and F64_BLOB(n), where n is the number of dimensions. The type hint is purely documentary and does not enforce a dimension constraint at the storage layer.
CREATE TABLE documents (
id INTEGER PRIMARY KEY,
content TEXT,
embedding F32_BLOB(4)
);
vector(text) -> blob
vector32(text) -> blob
Parse a JSON array of numbers into a 32-bit floating-point vector BLOB. The input must be a single-quoted string containing a JSON array of numeric values (integers or floats). vector() is an alias for vector32(). Each number in the array becomes one dimension of the vector. The values must be finite (no NaN or infinity).
SELECT vector_extract(vector32('[1.0, 2.0, 3.0]')); -- [1,2,3]
SELECT vector_extract(vector('[4.5, 5.5, 6.5]')); -- [4.5,5.5,6.5]
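Conceptually, the creation functions perform a packing step like the following, shown here in Python with a raw little-endian float32 layout for illustration (the actual Turso BLOB format is an internal detail):

```python
import json
import struct

def to_f32_array(text: str) -> bytes:
    """Pack a JSON array of numbers as little-endian 32-bit floats,
    4 bytes per dimension. Illustrative only: Turso's on-disk BLOB
    layout is not specified by this packing."""
    values = [float(v) for v in json.loads(text)]
    return struct.pack(f"<{len(values)}f", *values)

blob = to_f32_array('[1.0, 2.0, 3.0]')
print(len(blob))                   # 12 bytes: 3 dimensions * 4 bytes
print(struct.unpack("<3f", blob))  # (1.0, 2.0, 3.0)
```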
vector64(text) -> blob
Parse a JSON array of numbers into a 64-bit floating-point vector BLOB. Use this when you need higher numerical precision than 32-bit floats provide. The trade-off is double the storage per dimension (8 bytes instead of 4 bytes).
SELECT vector_extract(vector64('[1.0, 2.0, 3.0]')); -- [1,2,3]
vector32_sparse(text) -> blob
Parse a JSON array into a sparse 32-bit floating-point vector. Zero-valued dimensions are omitted from the binary representation, reducing storage for vectors with many zeros. This is particularly useful for bag-of-words models, TF-IDF representations, or any embedding scheme where most dimensions are zero. The full dimensionality of the vector is preserved – vector_extract will reconstruct the zeros – but only non-zero values are stored on disk.
SELECT vector_extract(vector32_sparse('[1.0, 0.0, 3.0]')); -- [1,0,3]
All distance functions take two vector BLOBs and return a floating-point number. Both vectors must have the same number of dimensions. Passing vectors with different dimension counts will result in an error.
When using distance functions for nearest-neighbor search, sort results in ascending order (ORDER BY distance ASC or simply ORDER BY distance) so that the most similar items appear first. All distance functions in Turso follow this convention: smaller values mean more similar vectors.
vector_distance_cos(v1, v2) -> real
Compute the cosine distance between two vectors, defined as 1 - cosine_similarity. The result ranges from 0 (identical direction) to 2 (opposite direction). A value of 1 means the vectors are orthogonal.
Cosine distance is typically preferred for text and document embeddings because it measures the angle between vectors rather than their magnitude.
-- Identical vectors: distance near 0
SELECT vector_distance_cos(
vector32('[1.0, 0.0]'),
vector32('[1.0, 0.0]')
); -- ~0.0
-- Orthogonal vectors: distance = 1
SELECT vector_distance_cos(
vector32('[1.0, 0.0, 0.0]'),
vector32('[0.0, 0.0, 1.0]')
); -- 1.0
-- Opposite vectors: distance near 2
SELECT vector_distance_cos(
vector32('[1.0, 0.0]'),
vector32('[-1.0, 0.0]')
); -- ~2.0
vector_distance_l2(v1, v2) -> real
Compute the Euclidean (L2) distance between two vectors. This is the straight-line distance between two points in the vector space. The result is always non-negative, with 0 meaning the vectors are identical. L2 distance is sensitive to vector magnitude, so it works best when vectors are on a similar scale.
SELECT vector_distance_l2(
vector32('[0.0, 0.0]'),
vector32('[3.0, 4.0]')
); -- 5.0
SELECT vector_distance_l2(
vector32('[1.0, 2.0, 3.0]'),
vector32('[4.0, 5.0, 6.0]')
); -- 5.19615242270663
vector_distance_dot(v1, v2) -> real
Compute the negative dot product between two vectors. A more negative result means the vectors are more similar (larger dot product). The result is negated so that sorting in ascending order returns the most similar vectors first.
SELECT vector_distance_dot(
vector32('[1.0, 2.0]'),
vector32('[3.0, 4.0]')
); -- -11.0
SELECT vector_distance_dot(
vector32('[1.0, 0.0]'),
vector32('[0.0, 1.0]')
); -- 0.0
vector_distance_jaccard(v1, v2) -> real
Compute the Jaccard distance between two vectors treated as sets. The Jaccard distance is defined as 1 - (intersection / union) where non-zero elements are treated as set members. This is useful when vector dimensions represent binary or categorical features, such as presence/absence of tags or keywords.
SELECT vector_distance_jaccard(
vector32('[1.0, 0.0, 1.0]'),
vector32('[0.0, 1.0, 1.0]')
); -- 0.666666656732559
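For intuition, the four distance metrics can be written out in a few lines of Python. These reference implementations reproduce the SQL examples above; they are sketches for understanding, not Turso's internals:

```python
import math

def cos_distance(a, b):
    """1 - cosine_similarity, range 0..2; ignores magnitude."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

def l2_distance(a, b):
    """Straight-line (Euclidean) distance between two points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def dot_distance(a, b):
    """Negated dot product, so ascending sort puts best matches first."""
    return -sum(x * y for x, y in zip(a, b))

def jaccard_distance(a, b):
    """1 - (intersection / union), treating non-zero dims as set members."""
    sa = {i for i, x in enumerate(a) if x != 0}
    sb = {i for i, y in enumerate(b) if y != 0}
    return 1.0 - len(sa & sb) / len(sa | sb)

print(cos_distance([1.0, 0.0], [-1.0, 0.0]))               # 2.0 (opposite)
print(l2_distance([0.0, 0.0], [3.0, 4.0]))                 # 5.0
print(dot_distance([1.0, 2.0], [3.0, 4.0]))                # -11.0
print(jaccard_distance([1.0, 0.0, 1.0], [0.0, 1.0, 1.0]))  # ~0.667
```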
vector_extract(blob) -> text
Convert a vector BLOB back into a human-readable JSON array. Useful for inspecting stored vectors.
SELECT vector_extract(vector32('[1.0, 2.0, 3.0]')); -- [1,2,3]
vector_concat(v1, v2) -> blob
Concatenate two vectors into a single vector. Both vectors must be the same type.
SELECT vector_extract(
vector_concat(
vector32('[1.0, 2.0]'),
vector32('[3.0, 4.0]')
)
); -- [1,2,3,4]
vector_slice(v, start, end) -> blob
Extract a contiguous sub-vector from dimension index start (inclusive) to end (exclusive). Indices are zero-based.
SELECT vector_extract(
vector_slice(vector32('[10.0, 20.0, 30.0, 40.0, 50.0]'), 0, 3)
); -- [10,20,30]
SELECT vector_extract(
vector_slice(vector32('[10.0, 20.0, 30.0, 40.0, 50.0]'), 2, 5)
); -- [30,40,50]
CREATE TABLE documents (
id INTEGER PRIMARY KEY,
content TEXT,
embedding BLOB
);
INSERT INTO documents VALUES
(1, 'Introduction to databases', vector32('[0.1, 0.2, 0.3, 0.4]'));
INSERT INTO documents VALUES
(2, 'SQL query optimization', vector32('[0.2, 0.1, 0.4, 0.3]'));
INSERT INTO documents VALUES
(3, 'Vector similarity search', vector32('[0.4, 0.3, 0.2, 0.1]'));
SELECT
id,
content,
vector_distance_cos(embedding, vector32('[0.15, 0.25, 0.35, 0.45]')) AS distance
FROM documents
ORDER BY distance
LIMIT 3;
-- 1|Introduction to databases|0.00203462258422431
-- 2|SQL query optimization|0.0590611621657705
-- 3|Vector similarity search|0.287167575420696
SELECT
id,
content,
vector_distance_l2(embedding, vector32('[0.15, 0.25, 0.35, 0.45]')) AS distance
FROM documents
ORDER BY distance
LIMIT 3;
-- 1|Introduction to databases|0.0999999802559595
-- 2|SQL query optimization|0.223606791085977
-- 3|Vector similarity search|0.458257562341844
Find all vectors within a certain distance rather than a fixed number of results:
SELECT
id,
content,
vector_distance_cos(embedding, vector32('[0.1, 0.2, 0.3, 0.4]')) AS distance
FROM documents
WHERE vector_distance_cos(embedding, vector32('[0.1, 0.2, 0.3, 0.4]')) < 0.01
ORDER BY distance;
SELECT id, vector_extract(embedding) FROM documents;
-- 1|[0.1,0.2,0.3,0.4]
-- 2|[0.2,0.1,0.4,0.3]
-- 3|[0.4,0.3,0.2,0.1]
Vector search can be combined with standard SQL WHERE clauses. Non-vector predicates are applied before or alongside the distance calculation, reducing the number of vectors that need to be compared:
-- Assuming a 'category' column exists alongside the embedding
CREATE TABLE products (
id INTEGER PRIMARY KEY,
name TEXT,
category TEXT,
embedding BLOB
);
INSERT INTO products VALUES (1, 'Running Shoes', 'footwear', vector32('[0.8, 0.1, 0.3]'));
INSERT INTO products VALUES (2, 'Hiking Boots', 'footwear', vector32('[0.7, 0.2, 0.5]'));
INSERT INTO products VALUES (3, 'Cotton T-Shirt', 'clothing', vector32('[0.1, 0.9, 0.2]'));
INSERT INTO products VALUES (4, 'Trail Sneakers', 'footwear', vector32('[0.75, 0.15, 0.4]'));
-- Search only within the 'footwear' category
SELECT
name,
vector_distance_cos(embedding, vector32('[0.8, 0.1, 0.35]')) AS distance
FROM products
WHERE category = 'footwear'
ORDER BY distance
LIMIT 2;
A complete example modeling an article recommendation system:
-- Create schema
CREATE TABLE articles (
id INTEGER PRIMARY KEY,
title TEXT,
embedding BLOB
);
-- Insert pre-computed embeddings from an external model
INSERT INTO articles VALUES
(1, 'Database Fundamentals', vector32('[0.12, -0.34, 0.56, 0.78]'));
INSERT INTO articles VALUES
(2, 'Machine Learning Basics', vector32('[0.23, 0.45, -0.67, 0.89]'));
INSERT INTO articles VALUES
(3, 'Web Development Guide', vector32('[0.34, -0.12, 0.78, -0.56]'));
INSERT INTO articles VALUES
(4, 'Data Structures', vector32('[0.11, -0.33, 0.55, 0.77]'));
-- Search: given a query embedding, find the 3 most similar articles
SELECT
a.id,
a.title,
vector_distance_cos(a.embedding, vector32('[0.10, -0.30, 0.50, 0.70]')) AS distance
FROM articles a
ORDER BY distance
LIMIT 3;
-- 4|Data Structures|0.0
-- 1|Database Fundamentals|4.52537882702912e-05
-- 2|Machine Learning Basics|0.843018284332676
Different distance functions suit different use cases:
| Function | Best for | Range | Notes |
|---|---|---|---|
vector_distance_cos | Text/document embeddings | 0 to 2 | Ignores vector magnitude; focuses on direction. Most common for NLP embeddings. |
vector_distance_l2 | Spatial data, image features | 0 to infinity | Sensitive to magnitude. Good when absolute position matters. |
vector_distance_dot | Normalized embeddings, ranking | negative infinity to positive infinity | Returns negated dot product so ascending sort gives best matches. |
vector_distance_jaccard | Binary/categorical features | 0 to 1 | Treats vectors as sets. Best for presence/absence features. |
When in doubt, start with vector_distance_cos. It is the most widely used metric for text embeddings produced by models like OpenAI, Cohere, and Sentence Transformers.
Since vector indexes are not yet implemented, keep the following in mind:
- Prefer vector32 over vector64 unless double precision is required. It uses half the storage (4 bytes vs. 8 bytes per dimension).
- Use vector32_sparse when vectors have many zero-valued dimensions to reduce storage.
- Dense vector storage is roughly dimensions * bytes_per_dimension. A 1536-dimensional vector32 column uses about 6 KB per row, while a 384-dimensional column uses about 1.5 KB per row. Choose the smallest embedding model that meets your accuracy requirements.

Vector search is a Turso extension and is not available in standard SQLite. The vector(), vector32(), vector64(), vector32_sparse(), vector_extract(), vector_distance_cos(), vector_distance_l2(), vector_distance_dot(), vector_distance_jaccard(), vector_concat(), and vector_slice() functions are all Turso-specific.
Turso extension. Change Data Capture (CDC) is a Turso-specific feature and is not available in standard SQLite.
Change Data Capture (CDC) records every data modification (insert, update, delete) into a dedicated tracking table. This is useful for building reactive applications, replicating data between systems, maintaining audit logs, and implementing event-driven architectures.
CDC is enabled per connection using a PRAGMA. Once enabled, every INSERT, UPDATE, and DELETE on user tables automatically generates a corresponding row in the CDC table. The level of detail captured depends on the chosen mode.
Note: This feature is currently marked as unstable, meaning the PRAGMA name may change in future versions. The functionality itself works as documented.
PRAGMA unstable_capture_data_changes_conn('mode[,table_name]');
The optional table_name argument specifies the table where changes are recorded, defaulting to turso_cdc when omitted. The available modes are:

| Mode | Description |
|---|---|
off | Disable CDC for this connection. No further changes are recorded. |
id | Record only the primary key (or rowid) of every changed row. The before, after, and updates columns are NULL. |
before | Record the full row state before each change. Populated for updates and deletes. Inserts still record only the id. |
after | Record the full row state after each change. Populated for inserts and updates. Deletes still record only the id. |
full | Record before state, after state, and an updates blob describing which columns changed. The most detailed mode. |
When CDC is first enabled and a DML statement is executed, Turso automatically creates the tracking table (default name turso_cdc) if it does not already exist. The table has the following schema:
CREATE TABLE turso_cdc
(change_id INTEGER PRIMARY KEY AUTOINCREMENT,
change_time INTEGER,
change_type INTEGER,
table_name TEXT,
id,
before BLOB,
after BLOB,
updates BLOB)
| Column | Type | Description |
|---|---|---|
change_id | INTEGER | Auto-incrementing unique identifier for each change entry. |
change_time | INTEGER | Unix epoch timestamp when the change was recorded. |
change_type | INTEGER | 1 for INSERT, 0 for UPDATE, -1 for DELETE. |
table_name | TEXT | Name of the table that was modified. Schema changes appear as changes to sqlite_schema. |
id | (any) | The primary key or rowid of the affected row. |
before | BLOB | A binary record containing the row state before the change. Populated only in before and full modes, and only for updates and deletes. NULL otherwise. |
after | BLOB | A binary record containing the row state after the change. Populated only in after and full modes, and only for inserts and updates. NULL otherwise. |
updates | BLOB | A binary record describing per-column update details. Populated only in full mode for updates. NULL otherwise. |
| Value | Meaning |
|---|---|
1 | INSERT |
0 | UPDATE |
-1 | DELETE |
Enable CDC with the desired mode:
-- Capture only primary keys of changed rows
PRAGMA unstable_capture_data_changes_conn('id');
-- Capture full before and after state
PRAGMA unstable_capture_data_changes_conn('full');
-- Store changes in a custom table instead of the default turso_cdc
PRAGMA unstable_capture_data_changes_conn('full,audit_log');
Turn off CDC for the current connection. Changes made after this point are not recorded.
PRAGMA unstable_capture_data_changes_conn('off');
The CDC table is a regular table that can be queried with standard SQL.
-- View all captured changes
SELECT * FROM turso_cdc;
-- View only inserts
SELECT * FROM turso_cdc WHERE change_type = 1;
-- View only updates
SELECT * FROM turso_cdc WHERE change_type = 0;
-- View only deletes
SELECT * FROM turso_cdc WHERE change_type = -1;
-- View changes for a specific table
SELECT * FROM turso_cdc WHERE table_name = 'users';
-- View recent changes (last hour)
SELECT * FROM turso_cdc
WHERE change_time > unixepoch() - 3600;
You can also delete old entries to keep the table from growing indefinitely. Modifications to the CDC table itself are not captured, so deleting rows from turso_cdc does not generate additional CDC entries.
-- Purge entries older than 24 hours
DELETE FROM turso_cdc
WHERE change_time < unixepoch() - 86400;
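A consumer typically tails the CDC table by remembering the highest change_id it has processed and polling for anything newer. A minimal sketch, using Python's sqlite3 module with a hand-built turso_cdc table standing in for a real Turso connection:

```python
import sqlite3

# Stand-in for a Turso database: build a turso_cdc table with the
# documented schema and simulate entries Turso would write.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE turso_cdc (
    change_id INTEGER PRIMARY KEY AUTOINCREMENT,
    change_time INTEGER, change_type INTEGER,
    table_name TEXT, id, "before" BLOB, "after" BLOB, updates BLOB)""")
conn.executemany(
    "INSERT INTO turso_cdc (change_time, change_type, table_name, id) "
    "VALUES (?, ?, ?, ?)",
    [(1700000000, 1, "users", 1),   # INSERT of users id 1
     (1700000001, 1, "users", 2),   # INSERT of users id 2
     (1700000002, 0, "users", 1)])  # UPDATE of users id 1

def poll_changes(conn, last_seen_id):
    """Return changes newer than last_seen_id, oldest first."""
    return conn.execute(
        "SELECT change_id, change_type, table_name, id FROM turso_cdc "
        "WHERE change_id > ? ORDER BY change_id", (last_seen_id,)).fetchall()

changes = poll_changes(conn, 0)
print(changes)                             # all three entries
print(poll_changes(conn, changes[-1][0]))  # [] -- nothing new yet
```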
Turso provides two scalar functions to decode the binary records stored in the before, after, and updates columns.
table_columns_json_array(table_name) -> text
Return a JSON array of column names for the given table. This can be used as the first argument to bin_record_json_object to decode a CDC binary record.
CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, price REAL);
SELECT table_columns_json_array('products');
-- ["id","name","price"]
bin_record_json_object(columns_json, blob) -> text
Decode a binary record blob into a JSON object, using the column names from the first argument to label each field. The first argument should be a JSON array of column name strings (typically from table_columns_json_array).
-- Decode the "after" column of a CDC row
SELECT bin_record_json_object(
table_columns_json_array('products'),
"after"
) FROM turso_cdc
WHERE table_name = 'products' AND change_type = 1;
CREATE TABLE users (
id INTEGER PRIMARY KEY,
name TEXT,
email TEXT
);
-- Enable CDC in id mode
PRAGMA unstable_capture_data_changes_conn('id');
-- Make some changes
INSERT INTO users VALUES (1, 'Alice', '[email protected]');
INSERT INTO users VALUES (2, 'Bob', '[email protected]');
UPDATE users SET email = '[email protected]' WHERE id = 1;
DELETE FROM users WHERE id = 2;
-- View the captured changes
SELECT change_id, change_type, table_name, id
FROM turso_cdc;
-- 1|1|users|1
-- 2|1|users|2
-- 3|0|users|1
-- 4|-1|users|2
CREATE TABLE inventory (
id INTEGER PRIMARY KEY,
qty INTEGER
);
PRAGMA unstable_capture_data_changes_conn('full');
INSERT INTO inventory VALUES (1, 100);
UPDATE inventory SET qty = 80 WHERE id = 1;
-- The INSERT has an "after" record but no "before"
-- The UPDATE has both "before" and "after" records, plus "updates"
SELECT change_id, change_type, "before" IS NOT NULL AS has_before,
"after" IS NOT NULL AS has_after, updates IS NOT NULL AS has_updates
FROM turso_cdc;
-- 1|1|0|1|0
-- 2|0|1|1|1
CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL);
-- Store changes in a table named "order_audit"
PRAGMA unstable_capture_data_changes_conn('id,order_audit');
INSERT INTO orders VALUES (1, 99.95);
SELECT change_id, change_type, table_name, id FROM order_audit;
-- 1|1|orders|1
CDC is configured per connection. Each connection can use a different mode and a different CDC table. Changes made by a connection that does not have CDC enabled are not recorded.
-- Connection 1 captures to "audit_log"
PRAGMA unstable_capture_data_changes_conn('full,audit_log');
-- Connection 2 captures to "sync_queue"
PRAGMA unstable_capture_data_changes_conn('id,sync_queue');
-- Changes from Connection 1 go to "audit_log"
-- Changes from Connection 2 go to "sync_queue"
Only changes executed by the connection that enabled CDC are recorded. If another connection modifies the same table without CDC enabled, those changes do not appear in any CDC table.
In full mode, DDL statements (CREATE TABLE, DROP TABLE, ALTER TABLE, CREATE INDEX, DROP INDEX) are also tracked as changes to sqlite_schema.
PRAGMA unstable_capture_data_changes_conn('full');
CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT);
-- Recorded as an INSERT into sqlite_schema
DROP TABLE products;
-- Recorded as a DELETE from sqlite_schema
CDC respects transaction boundaries. Changes within a transaction are recorded when the transaction commits. If a transaction is rolled back, no CDC entries are created for those changes.
CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL);
PRAGMA unstable_capture_data_changes_conn('id');
BEGIN;
INSERT INTO accounts VALUES (1, 1000.00);
INSERT INTO accounts VALUES (2, 2000.00);
COMMIT;
-- Both inserts are recorded after COMMIT
SELECT change_id, change_type, id FROM turso_cdc;
-- 1|1|1
-- 2|1|2
When a statement fails (for example, due to a constraint violation), neither the data change nor the CDC entry is recorded. Only successful operations appear in the CDC table.
CREATE TABLE tags (id INTEGER PRIMARY KEY, label TEXT UNIQUE);
PRAGMA unstable_capture_data_changes_conn('id');
INSERT INTO tags (label) VALUES ('urgent'), ('review');
-- This fails because 'urgent' already exists:
-- INSERT INTO tags (label) VALUES ('new'), ('other'), ('urgent');
INSERT INTO tags (label) VALUES ('done');
-- Only the successful inserts are recorded
SELECT change_id, change_type, table_name FROM turso_cdc;
-- 1|1|tags
-- 2|1|tags
-- 3|1|tags
Change Data Capture is a Turso extension. It is not available in standard SQLite. The PRAGMA unstable_capture_data_changes_conn, the default turso_cdc table, and the helper functions table_columns_json_array() and bin_record_json_object() are all Turso-specific.
Turso Extension. This feature is not available in SQLite. Materialized views must be explicitly enabled with the --experimental-views flag.
CREATE MATERIALIZED VIEW [IF NOT EXISTS] view-name [(column-name [, ...])]
AS select-statement
DROP VIEW [IF EXISTS] view-name
Materialized views in Turso are automatically updating database objects that store the results of a query and keep them current in real-time. Unlike traditional materialized views found in other databases that require manual refresh commands, Turso uses Incremental View Maintenance (IVM) to update materialized views as the underlying data changes.
When you insert, update, or delete rows in a base table, any dependent materialized views are updated within the same transaction. Only the incremental changes are processed – not the entire query – making updates efficient even for complex aggregations over large datasets. Because the view is updated inside the same transaction as the base table modification, materialized views are always consistent and never show stale data.
Materialized views are an experimental feature. You must pass the --experimental-views flag when starting the Turso CLI:
tursodb --experimental-views database.db
Without this flag, CREATE MATERIALIZED VIEW statements will fail with an error.
Traditional materialized views store a snapshot of query results that becomes stale as underlying data changes. Re-executing the entire query to refresh the view is costly for large datasets.
Turso takes a different approach. Instead of re-computing the entire view, IVM tracks what has changed and updates only the affected portions.
This is particularly powerful for aggregations. If a view computes SUM over millions of rows, inserting one new row only requires adding that single value to the existing sum – not re-summing all rows.
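The single-row-update idea can be sketched in a few lines of Python. This is a conceptual model of IVM for a SUM/COUNT group-by, not Turso's actual implementation:

```python
# Incrementally maintained view state: group key -> [sum, count].
# Applying a delta touches only the affected group, never the full table.
view = {}

def apply_delta(key, value, sign):
    """sign = +1 for an inserted row, -1 for a deleted row.
    An UPDATE is modeled as a delete of the old row plus an insert
    of the new one."""
    s, c = view.get(key, [0, 0])
    s, c = s + sign * value, c + sign
    if c == 0:
        view.pop(key, None)  # group has no rows left; drop it
    else:
        view[key] = [s, c]

apply_delta("day1", 2, +1)  # INSERT (day1, 2)
apply_delta("day1", 5, +1)  # INSERT (day1, 5)
apply_delta("day1", 2, -1)  # DELETE (day1, 2)
print(view)                 # {'day1': [5, 1]}
```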
A materialized view is created with CREATE MATERIALIZED VIEW. The AS clause contains a SELECT statement that defines the view’s contents. When the view is created, the query is executed once to populate the initial data, and then incremental maintenance keeps it up to date.
CREATE TABLE sales (product_id INTEGER, quantity INTEGER, day INTEGER);
INSERT INTO sales VALUES
(1, 2, 1), (2, 5, 1), (1, 1, 2),
(3, 1, 2), (2, 3, 3), (1, 1, 3);
CREATE MATERIALIZED VIEW daily_totals AS
SELECT day, SUM(quantity) as total, COUNT(*) as transactions
FROM sales
GROUP BY day;
SELECT * FROM daily_totals ORDER BY day;
-- 1|7.0|2
-- 2|2.0|2
-- 3|4.0|2
Once created, a materialized view is queried like any regular table.
Materialized views support a wide range of SQL constructs in their defining query:
- Aggregate functions: SUM, COUNT, AVG, MIN, MAX (including DISTINCT variants like COUNT(DISTINCT ...), SUM(DISTINCT ...))
- Expressions over base-table columns (e.g. b + a, min(a, b))
- Filtering with WHERE clauses

Materialized views stay current automatically. Every INSERT, UPDATE, and DELETE on a base table incrementally updates all dependent materialized views.
CREATE TABLE orders (
order_id INTEGER PRIMARY KEY,
customer_id INTEGER,
amount INTEGER
);
INSERT INTO orders VALUES (1, 100, 50), (2, 200, 75);
CREATE MATERIALIZED VIEW customer_totals AS
SELECT customer_id, SUM(amount) as total, COUNT(*) as order_count
FROM orders
GROUP BY customer_id;
SELECT * FROM customer_totals ORDER BY customer_id;
-- 100|50.0|1
-- 200|75.0|1
-- Insert a new order for customer 100
INSERT INTO orders VALUES (3, 100, 25);
SELECT * FROM customer_totals ORDER BY customer_id;
-- 100|75.0|2
-- 200|75.0|1
-- Continuing from above: update the amount of order 2
UPDATE orders SET amount = 100 WHERE order_id = 2;
SELECT * FROM customer_totals ORDER BY customer_id;
-- 100|75.0|2
-- 200|100.0|1
-- Continuing from above: delete an order
DELETE FROM orders WHERE order_id = 1;
SELECT * FROM customer_totals ORDER BY customer_id;
-- 100|25.0|1
-- 200|100.0|1
Materialized views are updated inside the same transaction as the base table modification. This guarantees that a view and its base tables are always mutually consistent: readers never observe a view that lags behind its base data, and a rollback undoes the view updates along with the base-table changes.
CREATE TABLE sales (product_id INTEGER, amount INTEGER);
INSERT INTO sales VALUES (1, 100), (1, 200), (2, 150), (2, 250);
CREATE MATERIALIZED VIEW product_totals AS
SELECT product_id, SUM(amount) as total, COUNT(*) as cnt
FROM sales
GROUP BY product_id;
SELECT * FROM product_totals ORDER BY product_id;
-- 1|300.0|2
-- 2|400.0|2
BEGIN;
INSERT INTO sales VALUES (1, 50), (3, 300);
SELECT * FROM product_totals ORDER BY product_id;
-- 1|350.0|3
-- 2|400.0|2
-- 3|300.0|1
ROLLBACK;
-- After rollback, the view returns to its previous state
SELECT * FROM product_totals ORDER BY product_id;
-- 1|300.0|2
-- 2|400.0|2
Materialized views can be defined over joins between two or more tables. Incremental maintenance applies to changes on any of the joined tables.
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, city TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, product_id INTEGER, quantity INTEGER);
CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, price INTEGER);
INSERT INTO customers VALUES (1, 'Alice', 'NYC'), (2, 'Bob', 'LA');
INSERT INTO products VALUES (1, 'Widget', 10), (2, 'Gadget', 20);
INSERT INTO orders VALUES (1, 1, 1, 5), (2, 1, 2, 3), (3, 2, 1, 2);
CREATE MATERIALIZED VIEW sales_summary AS
SELECT c.name AS customer_name, p.name AS product_name, o.quantity
FROM customers c
JOIN orders o ON c.id = o.customer_id
JOIN products p ON o.product_id = p.id;
SELECT * FROM sales_summary ORDER BY customer_name, product_name;
-- Alice|Gadget|3
-- Alice|Widget|5
-- Bob|Widget|2
Inserting into any of the three tables will incrementally update the view.
The DISTINCT keyword is supported both at the query level and inside aggregate functions.
CREATE TABLE events (id INTEGER PRIMARY KEY, category TEXT, status TEXT);
INSERT INTO events VALUES (1, 'A', 'open'), (2, 'B', 'open'),
(3, 'A', 'open'), (4, 'B', 'closed');
CREATE MATERIALIZED VIEW unique_categories AS
SELECT DISTINCT category FROM events;
SELECT * FROM unique_categories ORDER BY category;
-- A
-- B
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, product TEXT, amount INTEGER);
INSERT INTO orders VALUES (1, 'Alice', 'Widget', 10), (2, 'Alice', 'Gadget', 20),
(3, 'Alice', 'Widget', 15), (4, 'Bob', 'Widget', 30), (5, 'Bob', 'Widget', 25);
CREATE MATERIALIZED VIEW customer_stats AS
SELECT customer, COUNT(DISTINCT product) AS unique_products,
SUM(amount) AS total_amount
FROM orders
GROUP BY customer;
SELECT * FROM customer_stats ORDER BY customer;
-- Alice|2|45.0
-- Bob|1|55.0
Materialized views can use UNION and UNION ALL to combine results from multiple queries:
CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, department TEXT);
CREATE TABLE contractors (id INTEGER PRIMARY KEY, name TEXT, agency TEXT);
INSERT INTO employees VALUES (1, 'Alice', 'Engineering'), (2, 'Bob', 'Marketing');
INSERT INTO contractors VALUES (1, 'Charlie', 'TechCorp'), (2, 'Diana', 'DesignCo');
CREATE MATERIALIZED VIEW all_workers AS
SELECT name, department AS affiliation FROM employees
UNION ALL
SELECT name, agency AS affiliation FROM contractors;
SELECT * FROM all_workers ORDER BY name;
-- Alice|Engineering
-- Bob|Marketing
-- Charlie|TechCorp
-- Diana|DesignCo
Materialized views are dropped using DROP VIEW, the same syntax used for regular views. This removes the view definition and all associated internal state tables.
DROP VIEW sales_summary;
After dropping, the view can be recreated with the same or a different definition:
CREATE MATERIALIZED VIEW sales_summary AS
SELECT product_id, SUM(amount) AS revenue
FROM sales
GROUP BY product_id;
Materialized views trade write-time overhead for read-time performance. Each INSERT, UPDATE, or DELETE on a base table must also update any dependent materialized views, so weigh this extra write cost against the read-time savings when designing your schema.
As an experimental feature, materialized views have some limitations:
- The --experimental-views flag must be provided at startup.
- They are dropped with DROP VIEW, not a separate DROP MATERIALIZED VIEW statement.

Materialized views are a Turso extension and are not available in standard SQLite. The --experimental-views flag is required. The feature is experimental and behavior may change in future releases.

Turso Extension. This feature is not available in SQLite. Encryption must be explicitly enabled with the --experimental-encryption flag.
Turso supports transparent at-rest encryption to protect database files from unauthorized access. When enabled, all data written to disk is automatically encrypted and all data read from disk is automatically decrypted, with no changes required to SQL queries or application logic.
Encryption operates at the page level: each database page is independently encrypted and authenticated. A random nonce is generated for every page write, and an authentication tag is stored alongside the ciphertext. If a page is corrupted or tampered with, decryption will fail with an error rather than returning garbage data.
Encrypted databases use a modified file header. The first 16 bytes of a standard SQLite database contain the magic string SQLite format 3\0. In an encrypted Turso database, these bytes are replaced with a Turso-specific header that identifies the file as encrypted and records the cipher algorithm. The rest of the database header (bytes 16 through 99) remains unencrypted but is protected by authenticated encryption, so any tampering with the header is detected on read.
Encryption is an experimental feature. You must pass the --experimental-encryption flag when starting the Turso CLI:
tursodb --experimental-encryption database.db
Without this flag, the PRAGMA cipher and PRAGMA hexkey statements are not available.
Turso supports eight authenticated encryption algorithms across two families. All ciphers provide both confidentiality and integrity verification.
| Cipher Name | Key Size | Description |
|---|---|---|
aes128gcm | 16 bytes (128-bit) | AES-128 in Galois/Counter Mode |
aes256gcm | 32 bytes (256-bit) | AES-256 in Galois/Counter Mode |
AES-GCM is a widely deployed AEAD cipher. It is a solid choice when hardware AES-NI acceleration is available.
| Cipher Name | Key Size | Description |
|---|---|---|
aegis256 | 32 bytes (256-bit) | AEGIS-256 (recommended) |
aegis128l | 16 bytes (128-bit) | AEGIS-128L |
aegis128x2 | 16 bytes (128-bit) | AEGIS-128 with 2x parallelization |
aegis128x4 | 16 bytes (128-bit) | AEGIS-128 with 4x parallelization |
aegis256x2 | 32 bytes (256-bit) | AEGIS-256 with 2x parallelization |
aegis256x4 | 32 bytes (256-bit) | AEGIS-256 with 4x parallelization |
AEGIS ciphers generally offer better performance than AES-GCM while maintaining strong security properties. aegis256 is the recommended default for most use cases. The x2 and x4 variants exploit instruction-level parallelism and may perform better on CPUs with wide execution pipelines.
Cipher names are case-insensitive and accept multiple separator styles. For example, aegis256, aegis-256, and aegis_256 all refer to the same algorithm. Similarly, aes128gcm, aes-128-gcm, and aes_128_gcm are equivalent.
Keys are provided as hexadecimal strings. Use OpenSSL or any cryptographically secure random number generator:
# Generate a 256-bit key (32 bytes) -- for aes256gcm, aegis256, aegis256x2, aegis256x4
openssl rand -hex 32
# Generate a 128-bit key (16 bytes) -- for aes128gcm, aegis128l, aegis128x2, aegis128x4
openssl rand -hex 16
A 256-bit key produces a 64-character hex string. A 128-bit key produces a 32-character hex string.
Store your encryption key securely. If the key is lost, the encrypted database cannot be recovered. There is no key recovery mechanism.
There are two ways to configure encryption: PRAGMAs in the SQL shell, or URI parameters on the command line.
Start Turso with the encryption flag, then set the cipher and key before creating any tables:
tursodb --experimental-encryption database.db
PRAGMA cipher = 'aegis256';
PRAGMA hexkey = '2d7a30108d3eb3e45c90a732041fe54778bdcf707c76749fab7da335d1b39c1d';
-- The database is now encrypted. Use it normally.
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
INSERT INTO users VALUES (1, 'Alice');
SELECT * FROM users;
Both PRAGMA cipher and PRAGMA hexkey must be set before any other database operations. The encryption context is established when both values are present. Once set, the cipher and key cannot be changed within the same session.
Specify the cipher and key directly in the database URI:
tursodb --experimental-encryption \
"file:database.db?cipher=aegis256&hexkey=2d7a30108d3eb3e45c90a732041fe54778bdcf707c76749fab7da335d1b39c1d"
This is equivalent to setting the PRAGMAs and is the preferred method for scripting and automation.
To open an existing encrypted database, you must provide the correct cipher and key. The recommended approach is URI parameters:
tursodb --experimental-encryption \
"file:database.db?cipher=aegis256&hexkey=2d7a30108d3eb3e45c90a732041fe54778bdcf707c76749fab7da335d1b39c1d"
Alternatively, open the file and set PRAGMAs before any queries:
tursodb --experimental-encryption database.db
PRAGMA cipher = 'aegis256';
PRAGMA hexkey = '2d7a30108d3eb3e45c90a732041fe54778bdcf707c76749fab7da335d1b39c1d';
-- Database is now accessible
SELECT * FROM users;
Opening an encrypted database without providing the correct cipher and key will fail. Providing the wrong cipher or the wrong key will also fail. Turso does not silently return corrupted data.
You can query the current cipher with:
PRAGMA cipher;
-- Returns the cipher name, e.g. 'aegis256'
This returns the cipher algorithm configured for the current session. If no cipher has been set, nothing is returned.
| PRAGMA | Type | Description |
|---|---|---|
| PRAGMA cipher = 'name' | Write | Set the encryption cipher for the session. Must be set before any I/O. |
| PRAGMA cipher | Read | Return the current cipher name. |
| PRAGMA hexkey = 'hex_string' | Write | Set the encryption key as a hex-encoded string. Must match the cipher's key size. |
The key size must match the cipher:
| Key Size | Hex String Length | Ciphers |
|---|---|---|
| 16 bytes | 32 characters | aes128gcm, aegis128l, aegis128x2, aegis128x4 |
| 32 bytes | 64 characters | aes256gcm, aegis256, aegis256x2, aegis256x4 |
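The table above amounts to a small lookup. This hypothetical `validate_hexkey` helper (not part of Turso) sketches the check a client might run before passing a key, verifying both the hex characters and the length:

```python
# Key sizes in bytes per cipher, taken from the table above.
KEY_BYTES = {
    "aes128gcm": 16, "aegis128l": 16, "aegis128x2": 16, "aegis128x4": 16,
    "aes256gcm": 32, "aegis256": 32, "aegis256x2": 32, "aegis256x4": 32,
}

def validate_hexkey(cipher: str, hexkey: str) -> None:
    """Raise ValueError if hexkey is not valid hex of the right length for cipher."""
    expected = KEY_BYTES[cipher] * 2  # two hex characters per byte
    if len(hexkey) != expected:
        raise ValueError(f"{cipher} needs {expected} hex characters, got {len(hexkey)}")
    if any(c not in "0123456789abcdefABCDEF" for c in hexkey):
        raise ValueError("hexkey contains non-hexadecimal characters")

# A 64-character key is valid for aegis256; a short one is rejected.
validate_hexkey("aegis256",
                "2d7a30108d3eb3e45c90a732041fe54778bdcf707c76749fab7da335d1b39c1d")
```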
# Generate a key
KEY=$(openssl rand -hex 32)
# Create an encrypted database
tursodb --experimental-encryption \
"file:secret.db?cipher=aegis256&hexkey=$KEY" <<'SQL'
CREATE TABLE secrets (id INTEGER PRIMARY KEY, label TEXT, value TEXT);
INSERT INTO secrets VALUES (1, 'api_key', 'sk-abc123');
INSERT INTO secrets VALUES (2, 'db_password', 'hunter2');
SELECT * FROM secrets;
SQL
# Try opening without credentials -- this will fail
tursodb --experimental-encryption secret.db <<'SQL'
SELECT * FROM secrets;
SQL
# Error: database is encrypted or is not a database
KEY128=$(openssl rand -hex 16)
tursodb --experimental-encryption \
"file:fast.db?cipher=aes128gcm&hexkey=$KEY128" <<'SQL'
CREATE TABLE logs (id INTEGER PRIMARY KEY, message TEXT);
INSERT INTO logs VALUES (1, 'System started');
SELECT * FROM logs;
SQL
This error appears when:
- No cipher or key is provided when opening an encrypted database.
- The wrong key is provided for the configured cipher.
- The wrong cipher name is provided.
This means the hexkey value is not valid hexadecimal. Ensure the string contains only characters 0-9 and a-f (case-insensitive) and that its length matches the cipher’s key size (32 hex characters for 16-byte keys, 64 hex characters for 32-byte keys).
The cipher and key can only be set once per session. If you need to change encryption parameters, close the connection and open a new one.
The --experimental-encryption flag is required. The feature is experimental and the on-disk format may change in future releases.