
MySQL Tutorial: Percona Toolkit, MySQL's Best Companion


1 Background

Percona Toolkit is an open-source collection of tools from Percona that helps DBAs operate MySQL databases in a wide range of scenarios. Today I will cover two of them; more tools from the toolkit will follow in later posts.

2 About pt-show-grants

This tool extracts the user information stored in a database, including the CREATE USER and GRANT statements. Why cover it? Consider this scenario: you are building a disaster-recovery server in your own IDC from an Amazon cloud database, so you need to pull the data down from Amazon. Unlike Alibaba Cloud, Amazon does not offer xtrabackup physical backups; you can only restore from a logical backup, which is inconvenient. After restoring a logical backup you have to recreate the database users by hand, and running SHOW CREATE USER and SHOW GRANTS one user at a time is tedious. pt-show-grants gives you a quick way to regenerate all of those creation and grant statements.

Options:

Usage: pt-show-grants [OPTIONS] [DSN]

Options:
  --ask-pass                   Prompt for a password when connecting to MySQL
  --charset=s             -A   Default character set
  --config=A                   Read this comma-separated list of config files; if specified this must be the first option on the command line
  --database=s            -D   The database to use for the connection
  --defaults-file=s       -F   Only read mysql options from the given file
  --drop                       Add DROP USER before each user in the output
  --flush                      Add FLUSH PRIVILEGES after output
  --[no]header                 Print dump header (default yes)
  --help                       Show help and exit
  --host=s                -h   Connect to host
  --ignore=a                   Ignore this comma-separated list of users
  --[no]include-unused-roles   When dumping MySQL 8 roles include unused roles
  --only=a                     Only show grants for this comma-separated list of users
  --password=s            -p   Password to use when connecting
  --pid=s                      Create the given PID file
  --port=i                -P   Port number to use for connection
  --revoke                     Add REVOKE statements for each GRANT statement
  --separate                   List each GRANT or REVOKE separately
  --set-vars=A                 Set the MySQL variables in this comma-separated list of variable=value pairs
  --socket=s              -S   Socket file to use for connection
  --[no]timestamp              Add timestamp to the dump header (default yes)
  --user=s                -u   User for login if not current user
  --version                    Show version and exit

Option types: s=string; i=integer; f=float; h/H/a/A=comma-separated list; d=DSN; z=size; m=time

Rules: This tool accepts additional command-line arguments. Refer to the SYNOPSIS and usage information for details.

DSN syntax is key=value[,key=value...]. Allowable DSN keys:
  KEY  COPY  MEANING
  ===  ====  =============================================
  A    yes   Default character set
  D    yes   Default database
  F    yes   Only read default options from the given file
  P    yes   Port number to use for connection
  S    yes   Socket file to use for connection
  h    yes   Connect to host
  p    yes   Password to use when connecting
  u    yes   User for login if not current user

If the DSN is a bareword, the word is treated as the 'h' key.

Options and values after processing arguments:
  --ask-pass               FALSE
  --charset                (No value)
  --config                 /etc/percona-toolkit/percona-toolkit.conf,/etc/percona-toolkit/pt-show-grants.conf,/root/.percona-toolkit.conf,/root/.pt-show-grants.conf
  --database               (No value)
  --defaults-file          (No value)
  --drop                   FALSE
  --flush                  FALSE
  --header                 TRUE
  --help                   TRUE
  --host                   (No value)
  --ignore                 (No value)
  --include-unused-roles   FALSE
  --only                   (No value)
  --password               (No value)
  --pid                    (No value)
  --port                   (No value)
  --revoke                 FALSE
  --separate               FALSE
  --set-vars
  --socket                 (No value)
  --timestamp              TRUE
  --user                   (No value)
  --version                FALSE

The options mostly speak for themselves; in practice the connection options are all you usually need.

Using pt-show-grants


Here we can redirect the statements into a file:
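As a sketch (the host, port, and user below are placeholders, and pt-show-grants is assumed to be installed), dump every user's CREATE USER and GRANT statements and redirect them into a file:

```shell
# Build the command first so it can be reviewed before running.
# Connection details are placeholders -- substitute your own.
DUMP_CMD="pt-show-grants h=127.0.0.1,P=3306,u=admin --ask-pass"
echo "$DUMP_CMD > /tmp/grants.sql"
# eval "$DUMP_CMD" > /tmp/grants.sql    # one CREATE USER / GRANT block per user
# mysql -h <dr-host> -u root -p < /tmp/grants.sql   # replay on the DR server
```

Replaying grants.sql on the disaster-recovery server recreates all users in one step.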

View the output:


Get the creation and grant statements for a single user:
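A sketch of the single-user case, using --only (the user name app_user and the connection details are placeholders):

```shell
# --only restricts the dump to a comma-separated list of users.
ONLY_CMD="pt-show-grants h=127.0.0.1,P=3306,u=admin --ask-pass --only=app_user"
echo "$ONLY_CMD"
# eval "$ONLY_CMD"    # prints CREATE USER / GRANT statements for app_user only
```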


That concludes the introduction to pt-show-grants.

3 About pt-archiver

Most MySQL DBAs and developers eventually face the same need: archiving data, especially log data. The usual options are to design with partitioned tables from the start, or to write stored procedures and events that archive on a schedule. Here I recommend using pt-archiver instead.

So what can pt-archiver do?

1. Export live rows matching a filter condition into an archive file

2. Purge expired historical rows matching a filter condition from the live tables

3. Purge expired rows matching a filter condition and archive them into a local archive table, or into a history table on a remote archive server
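The three modes above map onto --file, --purge, and --dest respectively. A sketch, with placeholder hosts and table names (it is worth trying --dry-run first, which only prints the queries):

```shell
# Placeholder source DSN: adjust host, credentials, database, and table.
SRC="h=127.0.0.1,P=3306,u=admin,D=test,t=log_table"
# 1. Export matching rows to an archive file (rows are deleted from the
#    source afterwards unless --no-delete is added):
echo "pt-archiver --source $SRC --file '/backup/%Y-%m-%d-log.txt' --where 'id<10000' --dry-run"
# 2. Purge matching rows without keeping a copy:
echo "pt-archiver --source $SRC --purge --where 'id<10000' --dry-run"
# 3. Copy matching rows to a history table, then delete them from the source:
echo "pt-archiver --source $SRC --dest h=10.0.0.2,D=archive,t=log_table_hist --where 'id<10000' --dry-run"
```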

pt-archiver options:

Usage: pt-archiver [OPTIONS] --source DSN --where WHERE

Options:
  --analyze=s                  Run ANALYZE TABLE afterwards on --source and/or --dest
  --ascend-first               Ascend only first column of index
  --ask-pass                   Prompt for a password when connecting to MySQL
  --buffer                     Buffer output to --file and flush at commit
  --bulk-delete                Delete each chunk with a single statement (implies --commit-each)
  --[no]bulk-delete-limit      Add --limit to --bulk-delete statement (default yes)
  --bulk-insert                Insert each chunk with LOAD DATA INFILE (implies --bulk-delete --commit-each)
  --channel=s                  Channel name used when connected to a server using replication channels
  --charset=s             -A   Default character set
  --[no]check-charset          Ensure connection and table character sets are the same (default yes)
  --[no]check-columns          Ensure --source and --dest have same columns (default yes)
  --check-interval=m           If --check-slave-lag is given, this defines how long the tool pauses each time it discovers that a slave is lagging (default 1s). Optional suffix s=seconds, m=minutes, h=hours, d=days; if no suffix, s is used.
  --check-slave-lag=s          Pause archiving until the specified DSN's slave lag is less than --max-lag
  --columns=a             -c   Comma-separated list of columns to archive
  --commit-each                Commit each set of fetched and archived rows (disables --txn-size)
  --config=A                   Read this comma-separated list of config files; if specified this must be the first option on the command line
  --database=s            -D   Connect to this database
  --delayed-insert             Add the DELAYED modifier to INSERT statements
  --dest=d                     DSN specifying the table to archive to
  --dry-run                    Print queries and exit without doing anything
  --file=s                     File to archive to with DATE_FORMAT()-like formatting
  --for-update                 Adds the FOR UPDATE modifier to SELECT statements
  --header                     Print column header at top of --file
  --help                       Show help and exit
  --high-priority-select       Adds the HIGH_PRIORITY modifier to SELECT statements
  --host=s                -h   Connect to host
  --ignore                     Use IGNORE for INSERT statements
  --limit=i                    Number of rows to fetch and archive per statement (default 1)
  --local                      Do not write OPTIMIZE or ANALYZE queries to binlog
  --low-priority-delete        Adds the LOW_PRIORITY modifier to DELETE statements
  --low-priority-insert        Adds the LOW_PRIORITY modifier to INSERT or REPLACE statements
  --max-flow-ctl=f             Somewhat similar to --max-lag but for PXC clusters
  --max-lag=m                  Pause archiving if the slave given by --check-slave-lag lags (default 1s). Optional suffix s=seconds, m=minutes, h=hours, d=days; if no suffix, s is used.
  --no-ascend                  Do not use ascending index optimization
  --no-delete                  Do not delete archived rows
  --optimize=s                 Run OPTIMIZE TABLE afterwards on --source and/or --dest
  --output-format=s            Used with --file to specify the output format
  --password=s            -p   Password to use when connecting
  --pid=s                      Create the given PID file
  --plugin=s                   Perl module name to use as a generic plugin
  --port=i                -P   Port number to use for connection
  --primary-key-only           Primary key columns only
  --progress=i                 Print progress information every X rows
  --purge                      Purge instead of archiving; allows omitting --file and --dest
  --quick-delete               Adds the QUICK modifier to DELETE statements
  --quiet                 -q   Do not print any output such as for --statistics
  --replace                    Causes INSERTs into --dest to be written as REPLACE
  --retries=i                  Number of retries per timeout or deadlock (default 1)
  --run-time=m                 Time to run before exiting. Optional suffix s=seconds, m=minutes, h=hours, d=days; if no suffix, s is used.
  --[no]safe-auto-increment    Do not archive row with max AUTO_INCREMENT (default yes)
  --sentinel=s                 Exit if this file exists (default /tmp/pt-archiver-sentinel)
  --set-vars=A                 Set the MySQL variables in this comma-separated list of variable=value pairs
  --share-lock                 Adds the LOCK IN SHARE MODE modifier to SELECT statements
  --skip-foreign-key-checks    Disables foreign key checks with SET FOREIGN_KEY_CHECKS=0
  --slave-password=s           Sets the password to be used to connect to the slaves
  --slave-user=s               Sets the user to be used to connect to the slaves
  --sleep=i                    Sleep time between fetches
  --sleep-coef=f               Calculate --sleep as a multiple of the last SELECT time
  --socket=s              -S   Socket file to use for connection
  --source=d                   DSN specifying the table to archive from (required)
  --statistics                 Collect and print timing statistics
  --stop                       Stop running instances by creating the sentinel file
  --txn-size=i                 Number of rows per transaction (default 1)
  --user=s                -u   User for login if not current user
  --version                    Show version and exit
  --[no]version-check          Check for the latest version of Percona Toolkit, MySQL, and other programs (default yes)
  --where=s                    WHERE clause to limit which rows to archive (required)
  --why-quit                   Print reason for exiting unless rows exhausted

Option types: s=string; i=integer; f=float; h/H/a/A=comma-separated list; d=DSN; z=size; m=time

Rules:
  Specify at least one of --dest, --file, or --purge.
  --ignore and --replace are mutually exclusive.
  --txn-size and --commit-each are mutually exclusive.
  --low-priority-insert and --delayed-insert are mutually exclusive.
  --share-lock and --for-update are mutually exclusive.
  --analyze and --optimize are mutually exclusive.
  --no-ascend and --no-delete are mutually exclusive.
  DSN values in --dest default to values from --source if COPY is yes.

DSN syntax is key=value[,key=value...]. Allowable DSN keys:
  KEY  COPY  MEANING
  ===  ====  =============================================
  A    yes   Default character set
  D    yes   Database that contains the table
  F    yes   Only read default options from the given file
  L    yes   Explicitly enable LOAD DATA LOCAL INFILE
  P    yes   Port number to use for connection
  S    yes   Socket file to use for connection
  a    no    Database to USE when executing queries
  b    no    If true, disable binlog with SQL_LOG_BIN
  h    yes   Connect to host
  i    yes   Index to use
  m    no    Plugin module name
  p    yes   Password to use when connecting
  t    yes   Table to archive from/to
  u    yes   User for login if not current user

If the DSN is a bareword, the word is treated as the 'h' key.

The main options explained:

  --where 'id<10000'   Filter condition selecting the rows to operate on.
  --limit 10000        Fetch 10000 rows at a time for pt-archiver to process.
  --txn-size 1000      Commit a transaction every 1000 rows.
  --progress 5000      Print progress information every 5000 rows.
  --statistics         Print statistics at the end: start and end times, rows queried, archived, and deleted, and the total time and proportion spent in each phase, which is useful for tuning. Unless --quiet is given, pt-archiver prints its progress by default.
  --charset=UTF8       Use the UTF8 character set; this must match the character set of the database being operated on.
  --no-delete          Do not delete the source data. Note: without this option, rows are removed from the source table once processing completes.
  --bulk-delete        Delete the old rows on the source in bulk.
  --bulk-insert        Insert rows into the destination host in bulk (reading the code shows pt-archiver uses LOAD DATA LOCAL INFILE for this).
  --analyze=ds         Run ANALYZE TABLE afterwards to refresh statistics; d means the destination table, s the source table.
  --optimize=ds        Same idea as --analyze but runs OPTIMIZE TABLE; the values are also d and s.
  --dry-run            Simulate the run: print queries without executing anything.
  --source             Source DSN.
  --dest               Destination DSN.

One more important note on --where: with a numeric id column the format does not matter, but when filtering on time, the tool by default expects a timestamp/datetime format.

Notes before using pt-archiver:

Two server settings need to be configured first:

  secure-file-priv=/tmp    (pick whatever directory you like)

and local_infile must be enabled, either in the config file:

  [client]
  loose-local-infile=1

or at runtime:

  set global local_infile = 1;
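To confirm both settings before archiving, you can check the server variables; a sketch, with placeholder credentials:

```shell
# Both variables matter for --bulk-insert (LOAD DATA LOCAL INFILE).
CHECK_SQL="SHOW VARIABLES LIKE 'secure_file_priv'; SHOW VARIABLES LIKE 'local_infile';"
echo "mysql -h 127.0.0.1 -P 3306 -u admin -p -e \"$CHECK_SQL\""
# Run the printed command; local_infile should show ON.
```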

Using pt-archiver:


Archive rows whose primary key is below 200000 into the bak table, without deleting them from the source table:

pt-archiver --source h=10.10.119.63,P=3307,u=admin,p='xxxx',D=test,t=xxx_bak1 --dest h=10.10.119.63,P=3307,u=admin,p='xxxx',D=test,t=xxx_bak --charset=utf8 --where='pkid<200000' --progress 10000 --limit=10000 --txn-size 10000 --bulk-insert --no-delete --statistics

Explanation:

This archives rows with pkid below 200000 from table xxx_bak1 in the source database test to table xxx_bak in the target database test. The character set is set to utf8 (note that MySQL 8.0 defaults to utf8mb4, which the version of pt-archiver I used does not support). Rows are fetched 10000 at a time, progress is printed every 10000 rows, and the transaction size is also 10000 rows. LOAD DATA is used to insert into the target table, and the source rows are not deleted.

View the archived data:


It shows 100,000 rows here because in an earlier run I archived and deleted the first 100,000 rows, so only 100,000 rows remain.
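One way to check the result is to count the archived rows on the destination; a sketch, reusing the connection details from the command above (credentials are placeholders):

```shell
# Count the archived rows on the destination table; the source count
# should be unchanged because --no-delete was used.
COUNT_SQL="SELECT COUNT(*) FROM test.xxx_bak WHERE pkid < 200000;"
echo "mysql -h 10.10.119.63 -P 3307 -u admin -p -e \"$COUNT_SQL\""
```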

Archiving data by time

pt-archiver --source h=10.10.119.63,P=3307,u=admin,p='test',D=test,t=xxx_bak1 --dest h=10.10.119.63,P=3307,u=admin,p='test',D=test,t=xxx_bak --charset=UTF8 --where='create_time < current_date - interval 305 day' --progress 10000 --limit=10000 --txn-size 10000 --bulk-insert --no-delete

This archives data older than 305 days.


That concludes the usage of pt-archiver for archiving.

4 Summary

This article covered two tools from the Percona Toolkit: pt-show-grants and pt-archiver.
