spring-cloud-alibaba-examples/seata-example/readme.md
This example demonstrates how to use Seata Starter to add distributed transaction support to a Spring Cloud Alibaba application.
Seata is Alibaba's open-source distributed transaction middleware, which solves the distributed transaction problems of microservice architectures in an efficient, non-intrusive way.
Before you run this sample, you need to complete the following steps:
Note: Seata actually supports different databases for different applications, but MySQL is used here to keep the demonstration of Seata in a Spring Cloud application simple.
Modify the following configuration in the application.yml files under the resources directory of the three applications account-server, order-service, and storage-service to match the database configuration of your local environment.
```yaml
base:
  config:
    mdb:
      hostname: your mysql server ip address
      dbname: your database name for test
      port: your mysql server listening port
      username: your mysql server username
      password: your mysql server password
```
Seata's AT mode requires an `undo_log` table in each business database:
```sql
-- Note that 0.3.0+ adds the unique index ux_undo_log here
CREATE TABLE `undo_log` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `branch_id` bigint(20) NOT NULL,
  `xid` varchar(100) NOT NULL,
  `context` varchar(128) NOT NULL,
  `rollback_info` longblob NOT NULL,
  `log_status` int(11) NOT NULL,
  `log_created` datetime NOT NULL,
  `log_modified` datetime NOT NULL,
  `ext` varchar(100) DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `ux_undo_log` (`xid`,`branch_id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
```
Initialize the `global_table`, `branch_table`, `lock_table`, and `distributed_lock` tables in the Seata server database:
```sql
-- -------------------------------- The script used when storeMode is 'db' --------------------------------
-- the table to store GlobalSession data
CREATE TABLE IF NOT EXISTS `global_table`
(
    `xid`                       VARCHAR(128) NOT NULL,
    `transaction_id`            BIGINT,
    `status`                    TINYINT      NOT NULL,
    `application_id`            VARCHAR(32),
    `transaction_service_group` VARCHAR(32),
    `transaction_name`          VARCHAR(128),
    `timeout`                   INT,
    `begin_time`                BIGINT,
    `application_data`          VARCHAR(2000),
    `gmt_create`                DATETIME,
    `gmt_modified`              DATETIME,
    PRIMARY KEY (`xid`),
    KEY `idx_status_gmt_modified` (`status`, `gmt_modified`),
    KEY `idx_transaction_id` (`transaction_id`)
) ENGINE = InnoDB
  DEFAULT CHARSET = utf8mb4;

-- the table to store BranchSession data
CREATE TABLE IF NOT EXISTS `branch_table`
(
    `branch_id`         BIGINT       NOT NULL,
    `xid`               VARCHAR(128) NOT NULL,
    `transaction_id`    BIGINT,
    `resource_group_id` VARCHAR(32),
    `resource_id`       VARCHAR(256),
    `branch_type`       VARCHAR(8),
    `status`            TINYINT,
    `client_id`         VARCHAR(64),
    `application_data`  VARCHAR(2000),
    `gmt_create`        DATETIME(6),
    `gmt_modified`      DATETIME(6),
    PRIMARY KEY (`branch_id`),
    KEY `idx_xid` (`xid`)
) ENGINE = InnoDB
  DEFAULT CHARSET = utf8mb4;

-- the table to store lock data
CREATE TABLE IF NOT EXISTS `lock_table`
(
    `row_key`        VARCHAR(128) NOT NULL,
    `xid`            VARCHAR(128),
    `transaction_id` BIGINT,
    `branch_id`      BIGINT       NOT NULL,
    `resource_id`    VARCHAR(256),
    `table_name`     VARCHAR(32),
    `pk`             VARCHAR(36),
    `status`         TINYINT      NOT NULL DEFAULT '0' COMMENT '0:locked ,1:rollbacking',
    `gmt_create`     DATETIME,
    `gmt_modified`   DATETIME,
    PRIMARY KEY (`row_key`),
    KEY `idx_status` (`status`),
    KEY `idx_branch_id` (`branch_id`),
    KEY `idx_xid_and_branch_id` (`xid`, `branch_id`)
) ENGINE = InnoDB
  DEFAULT CHARSET = utf8mb4;

CREATE TABLE IF NOT EXISTS `distributed_lock`
(
    `lock_key`   CHAR(20)    NOT NULL,
    `lock_value` VARCHAR(20) NOT NULL,
    `expire`     BIGINT,
    PRIMARY KEY (`lock_key`)
) ENGINE = InnoDB
  DEFAULT CHARSET = utf8mb4;

INSERT INTO `distributed_lock` (lock_key, lock_value, expire) VALUES ('AsyncCommitting', ' ', 0);
INSERT INTO `distributed_lock` (lock_key, lock_value, expire) VALUES ('RetryCommitting', ' ', 0);
INSERT INTO `distributed_lock` (lock_key, lock_value, expire) VALUES ('RetryRollbacking', ' ', 0);
INSERT INTO `distributed_lock` (lock_key, lock_value, expire) VALUES ('TxTimeoutCheck', ' ', 0);
```
```sql
DROP TABLE IF EXISTS `storage_tbl`;
CREATE TABLE `storage_tbl` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `commodity_code` varchar(255) DEFAULT NULL,
  `count` int(11) DEFAULT 0,
  PRIMARY KEY (`id`),
  UNIQUE KEY (`commodity_code`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

DROP TABLE IF EXISTS `order_tbl`;
CREATE TABLE `order_tbl` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `user_id` varchar(255) DEFAULT NULL,
  `commodity_code` varchar(255) DEFAULT NULL,
  `count` int(11) DEFAULT 0,
  `money` int(11) DEFAULT 0,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

DROP TABLE IF EXISTS `account_tbl`;
CREATE TABLE `account_tbl` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `user_id` varchar(255) DEFAULT NULL,
  `money` int(11) DEFAULT 0,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
```
Spring Cloud Alibaba is adapted to Nacos 3.1.0. In this example, Nacos 3.1.0 is used as Seata's configuration center.
Create the Seata configuration in Nacos with data-id `seata.properties` and group `SEATA_GROUP` (the default group for Seata 2.1.0), and import it.
Add the following configuration items, required by the example applications, to the `seata.properties` file (transaction group configuration):
```properties
# Maps each transaction service group to a Seata server (TC) cluster name
service.vgroupMapping.default_tx_group=default
service.vgroupMapping.order-service-tx-group=default
service.vgroupMapping.account-service-tx-group=default
service.vgroupMapping.business-service-tx-group=default
service.vgroupMapping.storage-service-tx-group=default
```
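On the client side, each application must point its own transaction service group at one of the keys above. A minimal sketch of the corresponding application.yml fragment for one service (property names follow the Seata Spring Boot starter; which group each sample application actually uses is an assumption here):

```yaml
seata:
  # must match a service.vgroupMapping.* key defined in seata.properties
  tx-service-group: order-service-tx-group
```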
Since Seata 1.5.1, a built-in console is available locally at http://127.0.0.1:7091, where you can view transactions in progress and global lock information. When a transaction finishes, the related records are deleted.
Click to download the Seata 2.5.0 version. # The GitHub link is the source package, which must be compiled and built with Maven to produce the seata-server JAR file.
Or click to download apache-seata-2.5.0-incubating-bin.tar.gz. # The binary package, more convenient for debugging with seata-server.
Modify the following configuration items in the `seata-server/conf/application.yml` file:
```yaml
seata:
  # nacos configuration
  config:
    type: nacos
    nacos:
      server-addr: # Nacos service address
      group: SEATA_GROUP
      # namespace: public # Nacos namespace
      username: nacos
      password: nacos
      data-id: seata.properties # configuration file name in Nacos
      ## if using MSE Nacos with auth, mutually exclusive with the username/password attributes
      # access-key:
      # secret-key:
  registry:
    # support: nacos, eureka, redis, zk, consul, etcd3, sofa, seata
    type: nacos # use Nacos as the registry center
    nacos:
      application: seata-server
      # group: SEATA_GROUP
      # namespace: public # Nacos namespace (make sure to set it to the actual value)
      cluster: default
      server-addr: # Nacos registry center address
      username: nacos
      password: nacos
  store:
    # support: file, db, redis, raft
    mode: db # use database storage
    session:
      mode: file
    lock:
      mode: file
    db:
      datasource: druid
      db-type: mysql
      driver-class-name: com.mysql.jdbc.Driver
      url: jdbc:mysql://127.0.0.1:3306/seata?rewriteBatchedStatements=true # MySQL database connection
      user: root # MySQL username
      password: rootpass # MySQL password
      min-conn: 10
      max-conn: 100
      global-table: global_table
      branch-table: branch_table
      lock-table: lock_table
      distributed-lock-table: distributed_lock
      vgroup-table: vgroup_table
      query-limit: 1000
      max-wait: 5000
  server:
    service-port: 8091 # service port
    max-commit-retry-timeout: -1
    max-rollback-retry-timeout: -1
    rollback-failed-unlock-enable: false
    enable-check-auth: true
    enable-parallel-request-handle: true
    enable-parallel-handle-branch: false
    retry-dead-threshold: 70000
    xaer-nota-retry-timeout: 60000
    enableParallelRequestHandle: true
    applicationDataLimitCheck: true
    applicationDataLimit: 64000
    recovery:
      committing-retry-period: 1000
      async-committing-retry-period: 1000
      rollbacking-retry-period: 1000
      end-status-retry-period: 1000
      timeout-retry-period: 1000
    undo:
      log-save-days: 7
      log-delete-period: 86400000
    session:
      branch-async-queue-size: 5000 # asynchronous branch removal queue size
      enable-branch-async-remove: false # enable asynchronous branch removal
    ratelimit:
      enable: false
      bucketTokenNumPerSecond: 999999
      bucketTokenMaxNum: 999999
      bucketTokenInitialNum: 999999
  metrics:
    enabled: false
    registry-type: compact
    exporter-list: prometheus
    exporter-prometheus-port: 9898
  transport:
    rpc-tc-request-timeout: 15000
    enable-tc-server-batch-send-response: false
    min-http-pool-size: 10
    max-http-pool-size: 100
    max-http-task-queue-size: 1000
    http-pool-keep-alive-time: 500
    shutdown:
      wait: 3
    thread-factory:
      boss-thread-prefix: NettyBoss
      worker-thread-prefix: NettyServerNIOWorker
      boss-thread-size: 1
```
Note: Nacos 3.1.0 enables authentication, so the `username` and `password` properties are required; otherwise login fails. For more Nacos 3.1.0 related configuration, refer to nacos-example. The Nacos group that seata-server registers with at startup must match the group used in the sample applications, otherwise a "seata-server not found" error occurs! For more information about configuring seata-server with Nacos as the configuration center, see https://seata.io/zh-cn/docs/ops/deploy-by-docker-compose/#nacos-db.
Windows:

```shell
./seata-server.bat
```

Linux/Mac:

```shell
sh seata-server.sh
```
For more configuration startup parameters, please refer to https://seata.io/zh-cn/docs/user/quickstart/#%E6%AD%A5%E9%AA%A4-4-%E5%90%AF%E5%8A%A8%E6%9C%8D%E5%8A%A1.
Note: If you change the endpoint and the registry uses the default `file` type, remember to modify the value of `grouplist` in the `file.conf` file of each sample project. When `registry.type` or `config.type` in `registry.conf` is `file`, the file named in the inner `file` node is read; if the type is not `file`, the data is read directly from the registry/configuration center of the corresponding type. Using Nacos as the configuration and registry center is recommended.
Start the example by separately running the `main` classes of the account-server, order-service, storage-service, and business-service applications.
After starting the example, access the following URLs via HTTP GET to verify the scenarios in which business-service calls the other services through RestTemplate and FeignClient respectively:
http://127.0.0.1:18081/seata/feign
http://127.0.0.1:18081/seata/rest
When a service interface is invoked, two kinds of results are possible.
In the Controllers of the account-server, order-service, and storage-service services, the first logic executed is to print the Xid information from the RootContext. If the correct Xid is printed, that is, it changes on every call and all services within the same call share the same Xid, then the propagation and restoration of Seata's Xid works correctly.
View each service's log output separately (example):

```
Account Service ... xid: 192.168.44.1:8091:4540309594179612673
Order Service Begin ... xid: 192.168.44.1:8091:4540309594179612673
Storage Service Begin ... xid: 192.168.44.1:8091:4540309594179612673
...
Begin new global transaction [192.168.44.1:8091:4540309594179612673]
```
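Conceptually, the Xid travels with each call in a header and is rebound on the callee side before business logic runs. The following is a simplified, self-contained sketch of that bind/unbind pattern using a plain ThreadLocal; it is not the real Seata RootContext API, and the class and method names are invented for illustration only:

```java
// Simplified illustration of Xid propagation (NOT the real Seata API).
public class XidPropagationDemo {
    // RootContext-style holder: one Xid per thread
    private static final ThreadLocal<String> XID = new ThreadLocal<>();

    public static void bind(String xid) { XID.set(xid); }
    public static String current() { return XID.get(); }
    public static void unbind() { XID.remove(); }

    // Callee side: restore the Xid from the incoming "header",
    // run business logic, then always clean up (as an HTTP filter would)
    public static String callDownstream(String header) {
        bind(header);
        try {
            return "handled with xid=" + current();
        } finally {
            unbind();
        }
    }

    public static void main(String[] args) {
        // Caller side: a global transaction begins and the Xid is bound
        bind("192.168.44.1:8091:4540309594179612673");
        String header = current();                 // interceptor reads the Xid into a header
        System.out.println(callDownstream(header)); // callee sees the same Xid
        unbind();
    }
}
```

This is why all services in one invocation log the same Xid: every hop re-binds the value it received before executing its Controller logic.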
In this example, we simulate a scenario in which a user purchases goods. The Storage Service is responsible for deducting the inventory quantity, the Order Service is responsible for saving the order, and the Account service is responsible for deducting the balance of the user's account.
To demonstrate the example, we use `Random.nextBoolean()` to randomly throw exceptions in OrderService and AccountService, simulating a scenario in which exceptions occur randomly during service invocation.
If the distributed transaction is effective, the data in the three tables should remain consistent. Verify with:

```sql
-- verification example
SELECT * FROM account_tbl;
SELECT * FROM storage_tbl;
SELECT * FROM order_tbl;
```
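Beyond eyeballing the rows, the invariant can be checked with aggregate queries. A sketch, assuming the column names from the DDL above and whatever initial balance/stock you seeded: after any number of runs, `initial_balance - SUM(order_tbl.money)` should equal the remaining `account_tbl.money`, and `initial_stock - SUM(order_tbl.count)` should equal the remaining `storage_tbl.count`:

```sql
-- money spent on committed orders vs. balance left per user
SELECT a.user_id,
       a.money AS balance_left,
       (SELECT COALESCE(SUM(o.money), 0)
        FROM order_tbl o
        WHERE o.user_id = a.user_id) AS money_spent
FROM account_tbl a;
```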
Note: Since Random.nextBoolean() is used to randomly throw exceptions to simulate transaction exceptions, it is also necessary to verify whether distributed transactions can be rolled back correctly:
If exceptions are thrown in OrderService and AccountService, StorageService should roll back the inventory deduction, and the account balance should also be restored to its initial state.
View the distributed transaction logs: Check the undo_log table and global_table table to ensure that relevant records are deleted or restored during transaction rollback.
- Service providers based on Spring MVC automatically restore the Seata context when they receive an HTTP request carrying Seata information in its headers.
- The Seata context is automatically propagated when the service consumer calls through RestTemplate.
- The Seata context is automatically propagated when the service consumer calls through FeignClient.
- Scenarios where SeataClient and Hystrix are used together are supported.
- Scenarios where SeataClient and Sentinel are used together are supported.