Explain Analyze

Unlike the EXPLAIN statement, EXPLAIN ANALYZE actually executes the query and gathers statistics while it runs. Its output is very helpful for understanding exactly how a query was executed and for diagnosing the causes of query performance problems.
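For example, the plan shown later in this section is consistent with a statement like the following (a plausible reconstruction inferred from the plan's nodes; the original statement is not included in this section, and lineitem and orders are assumed to be the usual TPC-H tables):

    EXPLAIN ANALYZE
    SELECT l_orderkey, count(*)
    FROM lineitem
    JOIN orders ON lineitem.l_orderkey = orders.o_orderkey
    GROUP BY l_orderkey;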

Compared with EXPLAIN, EXPLAIN ANALYZE additionally outputs the following information for every node in the query plan:

  • The average number of rows produced

    For example, for the Append-only Scan (orders) node, "Avg 750000.0 rows x 2 workers" means there are 2 QEs and each QE output 750000 rows on average. Note that this is the number of rows each node outputs, not the number of rows it processes; the two can differ. For a Scan node with a filter condition, for instance, the number of rows processed may be far larger than the number of rows output (see the example after the plan output below).

  • Information about the virtual segments that produced the most rows (Max) and finished last (Last)

    For example, for the Append-only Scan (orders) node, the output below shows that the virtual segment that output the most rows is "seg1:changlei.local" and the virtual segment that finished last is "seg0:changlei.local", where "changlei.local" is the host name. seg1 and seg0 each output 750000 rows; they took 4.773 ms and 4.987 ms, respectively, to produce the first row, and 166 ms and 171 ms to produce all rows. The start offset is the time from when the master began dispatching the query until this node started executing.

                                                          QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Gather Motion 2:1  (slice2; segments: 2)  (cost=291580.96..318527.67 rows=1230576 width=16)
   Rows out:  Avg 1500000.0 rows x 1 workers at destination.  Max/Last(seg-1:changlei.local/seg-1:changlei.local) 1500000/1500000 rows with 2209/2209 ms to first row, 2577/2577 ms to end, start offset by 1.429/1.429 ms.
   ->  HashAggregate  (cost=291580.96..318527.67 rows=615288 width=16)
         Group By: lineitem.l_orderkey
         Rows out:  Avg 750000.0 rows x 2 workers.  Max/Last(seg1:changlei.local/seg1:changlei.local) 750000/750000 rows with 2243/2243 ms to first row, 2498/2498 ms to end, start offset by 2.615/2.615 ms.
         Executor memory:  56282K bytes avg, 56282K bytes max (seg1:changlei.local).
         ->  Hash Join  (cost=70069.00..250010.38 rows=3000608 width=15)
               Hash Cond: lineitem.l_orderkey = orders.o_orderkey
               Rows out:  Avg 3000607.5 rows x 2 workers.  Max/Last(seg0:changlei.local/seg1:changlei.local) 3001300/2999915 rows with 350/350 ms to first row, 1611/1645 ms to end, start offset by 3.819/3.816 ms.
               Executor memory:  49153K bytes avg, 49153K bytes max (seg1:changlei.local).
               Work_mem used:  23438K bytes avg, 23438K bytes max (seg1:changlei.local). Workfile: (0 spilling, 0 reused)
               (seg0)   Hash chain length 1.7 avg, 3 max, using 434205 of 524341 buckets.
               ->  Append-only Scan on lineitem  (cost=0.00..89923.15 rows=3000608 width=15)
                     Rows out:  Avg 3000607.5 rows x 2 workers.  Max/Last(seg0:changlei.local/seg1:changlei.local) 3001300/2999915 rows with 4.460/4.757 ms to first row, 546/581 ms to end, start offset by 350/349 ms.
               ->  Hash  (cost=51319.00..51319.00 rows=750000 width=8)
                     Rows in:  Avg 750000.0 rows x 2 workers.  Max/Last(seg1:changlei.local/seg0:changlei.local) 750000/750000 rows with 341/344 ms to end, start offset by 8.114/5.610 ms.
                     ->  Redistribute Motion 2:2  (slice1; segments: 2)  (cost=0.00..51319.00 rows=750000 width=8)
                           Hash Key: orders.o_orderkey
                           Rows out:  Avg 750000.0 rows x 2 workers at destination.  Max/Last(seg1:changlei.local/seg0:changlei.local) 750000/750000 rows with 0.052/2.461 ms to first row, 207/207 ms to end, start offset by 8.114/5.611 ms.
                           ->  Append-only Scan on orders  (cost=0.00..21319.00 rows=750000 width=8)
                                 Rows out:  Avg 750000.0 rows x 2 workers.  Max/Last(seg1:changlei.local/seg0:changlei.local) 750000/750000 rows with 4.773/4.987 ms to first row, 166/171 ms to end, start offset by 2.911/2.697 ms.
 Slice statistics:
   (slice0)    Executor memory: 281K bytes.
   (slice1)    Executor memory: 319K bytes avg x 2 workers, 319K bytes max (seg1:changlei.local).
   (slice2)    Executor memory: 105773K bytes avg x 2 workers, 105773K bytes max (seg1:changlei.local).  Work_mem: 23438K bytes max.
 Statement statistics:
   Memory used: 262144K bytes
 Settings:  default_hash_table_bucket_number=2
 Dispatcher statistics:
   executors used(total/cached/new connection): (4/4/0); dispatcher time(total/connection/dispatch data): (0.171 ms/0.000 ms/0.048 ms).
   dispatch data time(max/min/avg): (0.030 ms/0.004 ms/0.011 ms); consume executor data time(max/min/avg): (0.014 ms/0.009 ms/0.011 ms); free executor time(max/min/avg): (0.000 ms/0.000 ms/0.000 ms).
 Data locality statistics:
   data locality ratio: 1.000; virtual segment number: 2; different host number: 1; virtual segment number per host(avg/min/max): (2/2/2); segment size(avg/min/max): (593580028.000 B/593495232 B/593664824 B); segment size with penalty(avg/min/max): (103526336.000 B/103517072 B/103535600 B); continuity(avg/min/max): (1.000/1.000/1.000); DFS metadatacache: 0.094 ms; resource allocation: 4.503 ms; datalocality calculation: 1.493 ms.
 Total runtime: 2660.146 ms
(34 rows)
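As noted in the first bullet above, a node's output row count can be much smaller than the number of rows it processes. A minimal sketch that makes the difference visible, assuming the orders table from the plan above (o_totalprice is its TPC-H price column; the cutoff value is arbitrary):

    EXPLAIN ANALYZE
    SELECT * FROM orders WHERE o_totalprice > 500000;

In the resulting plan, the scan node's "Rows out" counts only the rows that pass the filter, even though the node still reads all 1.5 million rows of orders.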

Besides the per-node information, EXPLAIN ANALYZE also outputs statistics about the dispatcher and about data locality. The data locality statistics are interpreted as follows:

  • data locality ratio: a key indicator of data locality, namely the fraction of reads that are local. The higher it is, the better the data locality. A poor ratio usually points to a problem, for example HDFS local reads not being configured properly, causing excessive remote reads.

  • virtual segment number: the number of virtual segments used to execute the query; it determines the query's degree of parallelism.

  • different host number: the number of hosts used to execute the query. Each host can run several virtual segments, so the host count is always less than or equal to the virtual segment count. OushuDB tries to spread virtual segments evenly across hosts, so if a large query uses fewer hosts than the cluster has, some hosts have usually failed.

  • virtual segment number per host(avg/min/max): the average, minimum, and maximum number of virtual segments per host.

  • segment size(avg/min/max): the average, minimum, and maximum amount of data read by each virtual segment.

  • segment size with penalty(avg/min/max): the average, minimum, and maximum amount of data processed by each virtual segment with remote reads included, where the size of a remote read is computed as "net_disk_ratio" * block size. net_disk_ratio is the estimated cost ratio of a remote read to a local read and defaults to 1.01 (a worked example follows this list).

  • DFS metadatacache: time spent accessing the DFS metadata cache.

  • resource allocation: time spent acquiring resources for the query.

  • datalocality calculation: time spent running the algorithm that assigns HDFS blocks to virtual segments and computing data locality.
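As a worked example of the penalty formula above, assuming the default net_disk_ratio of 1.01 and a 128 MB HDFS block (the block size is an assumption; use your cluster's actual value), a single remotely read block is counted as:

    -- 1.01 * 128 MB: the penalized size of one remote block
    SELECT 1.01 * 134217728 AS penalized_block_bytes;  -- 135559905.28 B

Each remote block is thus charged slightly more than its physical size, reflecting the higher estimated cost of reading over the network.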