docs/design/2021-04-26-lock-view.md
This document describes the design of the Lock View feature, which provides tools to analyze transactions' lock-waiting, lock-contention, and deadlock problems.
Currently, it's very hard to analyze lock contentions and deadlocks between transactions. One may need to enable the general log, try to reproduce the problem, and then analyze the log to find the cause, which is difficult and inconvenient. Even with the log, the information about a conflict often contains only a start_ts, which doesn't tell much that's useful for reproducing the problem (such as the SQL statements in the transaction). We also found that this way of analyzing is not feasible in some scenarios, so a better approach to analyzing these kinds of problems is badly needed.
Several tables will be provided in information_schema. Some tables have both a local version (which fetches data on the current TiDB node) and a global version (which fetches data across the whole cluster); the global version's table name has the "CLUSTER_" prefix.
### (CLUSTER_)TIDB_TRX

| Field | Type | Comment |
|---|---|---|
| TRX_ID | unsigned bigint | The transaction ID (aka. start ts) |
| TRX_STARTED | time | Human-readable start time of the transaction |
| CURRENT_SQL_DIGEST | varchar(64) | The digest of the currently executing SQL statement |
| ALL_SQL_DIGESTS | text | A list of the digests of all executed SQL statements |
| STATE | enum('Running', 'Lock waiting', 'Committing', 'RollingBack') | The state of the transaction |
| WAITING_START_TIME | time | The time when the current lock waiting (if any) started |
| SCOPE | enum('Global', 'Local') | The scope of the transaction |
| ISOLATION_LEVEL | enum('REPEATABLE-READ', 'READ-COMMITTED') | |
| AUTOCOMMIT | bool | |
| SESSION_ID | unsigned bigint | |
| USER | varchar | |
| DB | varchar | |
| SET_COUNT | int | Modified keys of the current transaction |
| LOCKED_COUNT | int | Locked keys of the current transaction |
| MEM_BUFFER_KEYS | int | Entries in the transaction's membuffer |
| MEM_BUFFER_BYTES | int | Size occupied by the transaction's membuffer |
The cluster version of the table can be implemented by registering it in infoschema/cluster.go, which maps the global table name to the local one. The information needed by this table can be attached to the session's ProcessInfo, or even directly passed via the ProcessInfo struct. The PROCESS privilege is needed to access the full content of this table; for users without the PROCESS privilege, only transactions started by the current user will be shown and others will be filtered out, similar to the processlist table.

### DATA_LOCK_WAITS

| Field | Type | Comment |
|---|---|---|
| KEY | varchar | The key that's being waited on |
| TRX_ID | unsigned bigint | The current transaction that's waiting for the lock |
| SQL_DIGEST | varchar(64) | The digest of the SQL statement that's trying to acquire the lock |
| CURRENT_HOLDING_TRX_ID | unsigned bigint | The transaction that's holding the lock and blocking the current transaction |
The PROCESS privilege is needed to access this table.

### (CLUSTER_)DEADLOCKS

| Field | Type | Comment |
|---|---|---|
| DEADLOCK_ID | int | Multiple rows are needed to represent the information of a single deadlock event. This field distinguishes different events. |
| OCCUR_TIME | time | The physical time when the deadlock occurred |
| RETRYABLE | bool | Whether the deadlock is retryable. TiDB tries to determine if the current statement is (indirectly) waiting for a lock held by the current statement itself. |
| TRY_LOCK_TRX_ID | unsigned bigint | The transaction ID (start ts) of the transaction that's trying to acquire the lock |
| CURRENT_SQL_DIGEST | text | The digest of the SQL statement that's being blocked |
| KEY | varchar | The key that the transaction is trying to lock, but which is held by another transaction in the deadlock event |
| ALL_SQL_DIGESTS | text | A list of the digests of the SQL statements that the transaction has executed |
| TRX_HOLDING_LOCK | unsigned bigint | The transaction that's currently holding the lock. There will be another record with the same DEADLOCK_ID for that transaction. |
Rich information about the transactions in the wait chain can be fetched (from the CLUSTER_TIDB_TRX table) when handling the deadlock error. However, querying CLUSTER_TIDB_TRX for richer information on retryable deadlocks may make performance worse, so whether such information will be collected for retryable deadlocks will be decided after some tests. The PROCESS privilege is needed to access this table.

### (CLUSTER_)TRANSACTION_SUMMARY

| Field | Type | Comment |
|---|---|---|
| DIGEST | varchar(16) | The digest of a kind of transaction, calculated from ALL_SQL_DIGEST |
| ALL_SQL_DIGEST | text | A JSON array containing the digests of all SQL statements executed in this kind of transaction |
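The exact algorithm for deriving the transaction-level DIGEST is not specified here. As a minimal sketch (the hashing scheme, the truncation to 16 hex characters, and the function name are all assumptions for illustration, not TiDB's actual algorithm), the digest could be a hash over the ordered list of statement digests:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

// trxDigest derives a transaction-level digest from the ordered list of
// SQL statement digests (the ALL_SQL_DIGEST column). The scheme below
// (sha256 over the joined digests, truncated to 8 bytes = 16 hex chars,
// matching varchar(16)) is only an illustrative assumption.
func trxDigest(sqlDigests []string) string {
	sum := sha256.Sum256([]byte(strings.Join(sqlDigests, ",")))
	return hex.EncodeToString(sum[:8])
}

func main() {
	digests := []string{"e5796985ccafe2f7", "38b03afa5debbdf0"}
	fmt.Println(trxDigest(digests))
}
```

Because the input list is ordered, two transactions that execute the same statements in a different order would get different digests under this scheme.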
Old records will be evicted (together with the corresponding records in TRANSACTION_ID_DIGEST, see below) after the buffer is full. The PROCESS privilege is needed to access this table.

### (CLUSTER_)TRANSACTION_ID_DIGEST

| Field | Type | Comment |
|---|---|---|
| DIGEST | varchar(16) | The digest of a kind of transaction, calculated from ALL_SQL_DIGEST |
| TRX_ID | bigint | The transaction ID (aka. start ts) |
The PROCESS privilege is needed to access this table.

### Protocol

To pass the necessary information between TiDB and TiKV to make this feature possible, some additional information needs to be carried in the protocol defined in kvproto.
In deadlockpb:

```diff
 message WaitForEntry {
     ...
+    bytes key = ...;
+    bytes resource_group_tag = ...;
 }

 message DeadlockResponse {
     ...
+    repeated WaitForEntry wait_chain = ...;
 }
```
In kvrpcpb:

```diff
 message Context {
     ...
+    bytes resource_group_tag = ...;
 }

 message Deadlock {
     ...
+    repeated deadlock.WaitForEntry wait_chain = ...;
 }

+message GetLockWaitInfoRequest {
+    Context context = 1;
+}
+
+message GetLockWaitInfoResponse {
+    errorpb.Error region_error = 1;
+    string error = 2;
+    repeated deadlock.WaitForEntry entries = 3;
+}
```
A field resource_group_tag will be added to Context. The SQL digest (and maybe more information) will be serialized and carried in this field. This field is expected to be reused by another feature named Top SQL, which needs to carry the SQL digest and plan in most transactional requests.
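As a rough illustration of what the tag could carry (the encoding below is an assumption for illustration; the real format is defined by TiDB and kvproto), the tag could simply hold the digest of the normalized SQL text:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// buildResourceGroupTag sketches how a resource_group_tag might be built
// from a statement. Here we assume, purely for illustration, that the tag
// is the raw sha256 digest of the normalized SQL text; the actual encoding
// used by TiDB may differ and may include more information (e.g. the plan).
func buildResourceGroupTag(normalizedSQL string) []byte {
	digest := sha256.Sum256([]byte(normalizedSQL))
	return digest[:]
}

func main() {
	tag := buildResourceGroupTag("select * from t where id = ?")
	fmt.Printf("%d-byte tag, prefix %x\n", len(tag), tag[:4])
}
```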
A new KV RPC, GetLockWaitInfo, will be added to allow getting the lock-waiting status from TiKV. This is a store-level (rather than region-level) request, like UnsafeDestroyRange and the Green GC related RPCs. The request can carry filtering options to filter out information the user doesn't care about, but the current memory-table implementation only allows TiDB to scan the whole table and then filter it. This may need further optimization in the future.
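The TiDB-side post-filtering mentioned above can be sketched as follows (the struct and function names are illustrative, not the actual TiDB types):

```go
package main

import "fmt"

// WaitEntry mirrors the fields of a DATA_LOCK_WAITS row that a
// GetLockWaitInfo response would provide (field names are illustrative).
type WaitEntry struct {
	Key                 string
	TrxID               uint64
	CurrentHoldingTrxID uint64
}

// filterByTrx keeps only the entries belonging to the given waiting
// transaction. Since TiKV currently returns the whole table, this
// filtering has to happen on the TiDB side after the scan.
func filterByTrx(entries []WaitEntry, trxID uint64) []WaitEntry {
	var out []WaitEntry
	for _, e := range entries {
		if e.TrxID == trxID {
			out = append(out, e)
		}
	}
	return out
}

func main() {
	entries := []WaitEntry{
		{Key: "k1", TrxID: 100, CurrentHoldingTrxID: 200},
		{Key: "k2", TrxID: 300, CurrentHoldingTrxID: 100},
	}
	// Only the first entry is waited on by transaction 100.
	fmt.Println(len(filterByTrx(entries, 100)))
}
```

Pushing such filters down into the RPC request would avoid transferring the whole table, which is the optimization left for the future.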
The locking key and the resource_group_tag that come from the Context of the pessimistic lock request are added to the deadlock-detect request, and the wait chain is added to the deadlock-detect response.
The wait chain will be added to the Deadlock error returned by the PessimisticLock request, so that when a deadlock happens, the full wait-chain information can be passed to TiDB.
#### pessimistic-txn.deadlock-history-capacity

Specifies how many recent deadlock events each TiDB node should keep.

* Dynamically changeable via HTTP API.
* Value: 0 to 10000
* Default: 10
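The capacity-bounded history above amounts to a FIFO buffer that evicts the oldest event. A minimal sketch (the types and names are hypothetical, and a real implementation would additionally need locking):

```go
package main

import "fmt"

// DeadlockRecord is a simplified deadlock event as kept for the
// (CLUSTER_)DEADLOCKS table (fields are illustrative).
type DeadlockRecord struct {
	ID        uint64
	Retryable bool
}

// history keeps at most capacity recent records, evicting the oldest,
// mirroring the pessimistic-txn.deadlock-history-capacity setting.
type history struct {
	capacity int
	records  []DeadlockRecord
}

func (h *history) push(r DeadlockRecord) {
	if h.capacity == 0 {
		return // a capacity of 0 disables collection entirely
	}
	if len(h.records) == h.capacity {
		h.records = h.records[1:] // drop the oldest event
	}
	h.records = append(h.records, r)
}

func main() {
	h := &history{capacity: 3}
	for i := uint64(1); i <= 5; i++ {
		h.push(DeadlockRecord{ID: i})
	}
	// Only the 3 most recent events survive.
	fmt.Println(len(h.records), h.records[0].ID)
}
```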
#### pessimistic-txn.deadlock-history-collect-retryable

Specifies whether to collect retryable deadlock errors in the (CLUSTER_)DEADLOCKS table.

* Dynamically changeable via HTTP API.
* Value: 0 (do not collect) or 1 (collect)
* Default: 0
#### transaction-summary.transaction-id-digest-capacity

Specifies how many transactions each TiDB node should keep in transaction_id_digest.

* Dynamically changeable via HTTP API.
* Value: 0 to 100000
* Default: 10000
#### transaction-summary.transaction-id-digest-min-duration

Specifies how long a transaction must run before it is recorded in transaction_id_digest and considered when calculating trx_summary.

* Dynamically changeable via HTTP API.
* Value: 0 to 2147483647
* Unit: ms
* Default: 1000
#### transaction-summary.transaction-summary-capacity

Specifies how many transaction summaries each TiDB node should keep in trx_summary.

* Dynamically changeable via HTTP API.
* Value: 0 to 5000
* Default: 500
This feature is not expected to be incompatible with other features. During upgrading, when TiDB nodes of different versions exist at the same time, the CLUSTER_-prefixed tables may encounter errors. However, since this feature is typically used manually by users, this shouldn't be a severe problem, so we don't need to care much about it.
Some other databases provide similar features:

* MySQL provides the data_locks and data_lock_waits tables.
* Oracle provides the v$lock view.
* CockroachDB provides crdb_internal.node_transaction_statistics, which shows rich information for transactions.

Known limitations:

* The DATA_LOCK_WAITS table doesn't show all (logical) lock waiting.
* The content of the (CLUSTER_)DEADLOCKS table can't be promised to be accurate and complete.
* The history of TIDB_TRX and DATA_LOCK_WAITS is not kept, which possibly makes it still difficult to investigate some kinds of problems.