In the previous article I went through setting up the Auto-SPM functionality, turning SPM into a true "auto" mode. I then constructed some fairly obvious conditions for Automatic SQL Plan Management to operate under.
And yes — it worked exactly as expected. The engine kicked in on its own, activated a clearly better SQL plan, and did so without any manual intervention. The choice was based on obvious metrics: lower CPU usage, fewer logical reads, and above all — shorter elapsed time.
As also noted, the cost calculated by the CBO was lower as well, yet it didn’t seem to be the deciding factor — or perhaps it wasn’t even taken into account.
Now, let’s take things a step further and see what happens when we complicate the test cases a bit — pushing Auto SPM to, let’s say, reach a compromise.
Short reminder –
I am working with a fairly simple test SQL query based on a trivial table with 20 million rows and an index on the ID column (the visibility of the index will be toggled according to my needs – sometimes the query will run with a FTS, sometimes with an Index Range Scan).
SQL> create table TSTUS.TESTASPM tablespace PSTS as
select rownum as ID, sysdate-mod(rownum,365) as mydate, rpad('X',32,'X') as padding
from dual connect by level <= 20e6 ;
Table created.
SQL> create index TSTUS.IDX_TESTASPM_ID on TSTUS.TESTASPM ( ID ) invisible ;
Index created.
The test SQL is also very simple. The only "complication" is that I will be resizing the scanned range by "shifting the ID to the right", so that the index range scan gets costed as less and less attractive.
Essentially, the test SQL looks like this (note, the index visibility will be toggled according to the test requirements):
test4b.sql
select /* <some comment to make sql_id unique> */ max(padding) from TSTUS.TESTASPM where ID <= &rather_big_number ;
alter index TSTUS.IDX_TESTASPM_ID &set_index2_vis_invisb ;
As in the previous post, I will enable SQL trace (aka 10046) for sql_id to look at certain figures, will flush buffer cache to obtain physical reads and will examine the output of “SET AUTOTRACE TRACE EXPLAIN STATISTICS”.
For details of the scripts and commands used, refer to the previous post and look for them at the top of the article.
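For orientation, here is a condensed sketch of that per-test housekeeping (the real scripts – e.g. @fb for the buffer cache flush – are listed in the previous post; &trc_sql_id below is just a placeholder, not one of my script variables):
alter system flush buffer_cache ;
set autotrace trace explain statistics
alter system set events 'sql_trace [sql:&trc_sql_id] level 12';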
Prepare test environment
I will use the SELECT from above with the predicate section encompassing 100k rows out of 20 million in total.
A first execution, just to intercept the sql_id (it happens to be fdrcx2c2ms48x) in order to trace it with SQL Trace:
SQL> select /* TESTASPM_LARGE_RANGE */ max(padding) from TSTUS.TESTASPM where ID <= 100e3;
MAX(PADDING)
--------------------------------
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
1 row selected.
SQL_ID: fdrcx2c2ms48x
Enable SQL Trace on sql_id : fdrcx2c2ms48x :
15:44:06 SYS@DB23AICDB->MYPDB> alter system set events 'sql_trace [sql:fdrcx2c2ms48x] level 12';
System altered.
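(Once the tests are over, the event can be switched off in the same way; a small hedge: double-check the exact syntax on your version, but the commonly used form is the one below.)
alter system set events 'sql_trace [sql:fdrcx2c2ms48x] off';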
Test scenario
As previously, let me explain my intentions.
I executed the same query with:
1). full table scan
2). index access
The noted elapsed times of the two will be in the same ballpark (both in the tens of milliseconds), with the FTS getting the result faster: roughly 40 ms versus 100 ms.
Now we’ll observe that the Auto-SPM had to “struggle” and apparently worked out some sort of compromise as the associated figures weren’t that obvious any longer.
Reminder: the range of IDs is this time significantly larger – not just 20 as in the previous post, but 100k (out of 20 million).
This obviously makes the Index Range Scan less attractive.
In addition, I abandoned the idea of deliberately hindering the Full Table Scan and let it run with the full throttle, utilizing the smart-scan feature of Exadata.
Side note.
If by any chance you’re carrying out your own tests based on similar scenarios and do not run them on Exadata, I am sure you’ll be able to achieve similar results if you follow one simple rule.
You’ll simply need to increase the range of IDs and move the slider to the right.
For example, with a total cardinality of 20 million, a predicate such as ID <= 5e6
may do the job — bringing the elapsed times for the Full Table Scan and the Index Range Scan closer together (although this still depends on your disk subsystem’s capabilities).
This is, btw., closely related: if you’d like to get an in-depth understanding of how the Index Range Scan is costed and compared with the cost of a full table scan, may I suggest this article.
Worth mentioning: the optimizer is not aware whether I hinder my FTS (e.g. fiddle with db_file_multiblock_read_count and/or toggle cell_offload_processing to true/false) and calculates the cost figure independently of that.
The same goes for the decision whether or not to apply offloading on Exadata – it is taken after the CBO has calculated the cost (unless you explicitly disable offload with cell_offload_processing = false).
Meaning, even if the full table scan has been chosen by the CBO, this in itself is still no guarantee that the smart scan will kick in – here are the details, if you’re curious.
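If you want to verify on your own system whether the offload actually kicked in for your session (without digging through a 10046 trace), one generic way – not part of my scripts, just a common check – is to look at the session-level cell statistics before and after the query:
select sn.name, ms.value
from   v$mystat ms, v$statname sn
where  sn.statistic# = ms.statistic#
and    sn.name in ('cell physical IO bytes eligible for predicate offload',
                   'cell physical IO interconnect bytes returned by smart scan');
If the second statistic grows after your full table scan, the smart scan did its job.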
OK – let’s get back to the proper tests.
Execution with Full Table Scan (~40 ms)
This time without index (the index has been set to INVISIBLE), so using full table scan.
As mentioned, I am running this test on an Exadata machine and will unmercifully utilize its offload capabilities. That means (anticipating) that the run with FTS + offload + Exadata Storage Index will report a shorter elapsed time than the Index Range Scan, although the CBO would choose to use the index.
Let’s see it in action (SQL Trace for sql_id : fdrcx2c2ms48x will be on, buffer cache flushed).
15:46:35 TSTUS @ MYPDB> @test4b
15:46:37 TSTUS @ MYPDB> -- alter session set cell_offload_processing = false ;
15:46:37 TSTUS @ MYPDB> -- alter session set db_file_multiblock_read_count = 1 ;
15:46:37 TSTUS @ MYPDB>
15:46:37 TSTUS @ MYPDB> alter index TSTUS.IDX_TESTASPM_ID &set_index2_vis_invisb ;
Enter value for set_index2_vis_invisb: INVISIBLE
old 1: alter index TSTUS.IDX_TESTASPM_ID &set_index2_vis_invisb
new 1: alter index TSTUS.IDX_TESTASPM_ID INVISIBLE
Index altered.
15:46:43 TSTUS @ MYPDB> @fb
15:46:43 TSTUS @ MYPDB> set echo on
15:46:43 TSTUS @ MYPDB> alter system flush buffer_cache ;
System altered.
15:46:43 TSTUS @ MYPDB> set echo off
15:46:43 TSTUS @ MYPDB> set feedback on sql_id
15:46:43 TSTUS @ MYPDB> set echo off
INST_ID SID SERIAL# OSPID PID TRACEFILE
------- ----------- ------- ---------- ---------- ----------------------------------------------------------------------------------------------------
1 2427 57440 258650 131 DB23AICDB_ora_258650.trc
15:46:43 TSTUS @ MYPDB> select /* TESTASPM_LARGE_RANGE */ max(padding) from TSTUS.TESTASPM where ID <= 100e3;
MAX(PADDING)
--------------------------------
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
1 row selected.
SQL_ID: fdrcx2c2ms48x
Elapsed: 00:00:00.08
Let’s check the status of SPM (I had to wait several seconds for the Capture Task to gather the first statistics – the baseline).
The analysis of the raw SQL trace file (10046) as well as the xml tree retrieved from the column NOTES will follow – if in doubt, refer back to the previous post.
It is, btw., worth matching the timestamps recorded in all subsequent (and previous) reports against the timestamps displayed directly at the SQL*Plus command prompt:
SYS@DB23AICDB->MYPDB> @lssqlsts
TASK_NAME INTERVAL STATUS LAST_SCHEDULE_TIME CURRENT_TIME
---------------------------------------------------------------- -------- -------------------- ------------------- -------------------
Auto STS Capture Task 120 SUCCEEDED 22-08-2025 15:47:06 22-08-2025 15:48:14
Elapsed: 00:00:00.00
PLAN_TIMESTAMP SQL_ID SQL_TEXT PARSING_SCHEMA_NAME ELAPSED_TIME CPU_TIME BUFFER_GETS DISK_READS EXECS PARSING_US FIRST_LOAD_TIME SQL_PLAN_BASELINE OPTIMIZER_COST
------------------- --------------- -------------------------------------------------- ------------------------------ ------------ ------------ ----------- -------------------- ------------ ---------- ------------------- ------------------------------ --------------
22-08-2025 15:46:43 fdrcx2c2ms48x select /* TESTASPM_LARGE_RANGE */ max(padding) fro TSTUS 83,788 57,471 144,498 144,332 1 TSTUS 2025-08-22/15:43:42 39438
m TSTUS.TESTASPM where ID <= 100e3
So, for the time being we have data for the single run of that SQL query (fdrcx2c2ms48x) and obviously nothing to compare with (it will come though).
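By the way, @lssqlsts is just my helper script; if you don’t have an equivalent at hand, a rough sketch of the first report above could be based on DBA_AUTOTASK_SCHEDULE_CONTROL (that is my assumption about where this data lives – adjust the view and columns to your own tooling and version):
select task_name, interval, status, last_schedule_time, systimestamp as now
from   dba_autotask_schedule_control
where  task_name like 'Auto STS%';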
For now, let us examine the raw 10046 trace for this execution and emphasize important sections.
In the “Diag Trace” directory we will find the trace file :
# grep fdrcx2c2ms48x *.trc
DB23AICDB_ora_258650.trc:PARSING IN CURSOR #139840530859208 len=84 dep=0 uid=134 oct=3 lid=134 tim=1694162496203 hv=87822621 ad='c69694f0' sqlid='fdrcx2c2ms48x'
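If you are not sure where that directory sits on your installation, a quick generic check (nothing specific to this test) is:
select name, value from v$diag_info where name = 'Diag Trace';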
Important excerpts of the raw 10046 from above (slightly re-edited, important sections marked):
382 PARSING IN CURSOR #139840530859208 len=84 dep=0 uid=134 oct=3 lid=134 tim=1694162496203 hv=87822621 ad='c69694f0' sqlid='fdrcx2c2ms48x'
383 select /* TESTASPM_LARGE_RANGE */ max(padding) from TSTUS.TESTASPM where ID <= 100e3
384 END OF STMT
385 EXEC #139840530859208:c=40,e=40,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=2834813234,tim=1694162496203
386 WAIT #139840530859208: nam='reliable message' ela=1574 channel context=4214713864 channel handle=7788711576 broadcast message=4216515320 obj#=11061 tim=1694162500070
387 WAIT #139840530859208: nam='Allocate PGA memory from OS' ela=10 Bytes=131072 where=1 p3=0 obj#=11061 tim=1694162500174
388 WAIT #139840530859208: nam='Allocate PGA memory from OS' ela=10 Bytes=196608 where=1 p3=0 obj#=11061 tim=1694162500600
389 WAIT #139840530859208: nam='Allocate UGA memory from OS' ela=8 Bytes=196608 where=1 p3=0 obj#=11061 tim=1694162500757
390 WAIT #139840530859208: nam='cell smart table scan' ela=170 cellhash#=976102048 p2=0 p3=0 obj#=82773 tim=1694162501285
391 WAIT #139840530859208: nam='cell disk open' ela=133 cellid#=2743020163 where=2 p3=0 obj#=82773 tim=1694162501588
392 WAIT #139840530859208: nam='cell smart table scan' ela=159 cellhash#=2743020163 p2=0 p3=0 obj#=82773 tim=1694162501945
393 WAIT #139840530859208: nam='cell disk open' ela=139 cellid#=2957023425 where=2 p3=0 obj#=82773 tim=1694162502239
394 WAIT #139840530859208: nam='cell smart table scan' ela=166 cellhash#=2957023425 p2=0 p3=0 obj#=82773 tim=1694162502558
395 WAIT #139840530859208: nam='cell disk open' ela=108 cellid#=2743020163 where=2 p3=0 obj#=82773 tim=1694162502804
..
..
447 WAIT #139840530859208: nam='cell smart table scan' ela=23 cellhash#=2957023425 p2=0 p3=0 obj#=82773 tim=1694162535448
448 WAIT #139840530859208: nam='cell smart table scan' ela=118 cellhash#=2957023425 p2=0 p3=0 obj#=82773 tim=1694162535866
449 FETCH #139840530859208:c=19743,e=40067,p=144284,cr=144288,cu=0,mis=0,r=1,dep=0,og=1,plh=2834813234,tim=1694162536335
450 STAT #139840530859208 id=1 cnt=1 pid=0 pos=1 obj=0 op='SORT AGGREGATE (cr=144288 pr=144284 pw=0 str=1 time=40067 dr=144284 dw=0 us)'
451 STAT #139840530859208 id=2 cnt=74002 pid=1 pos=1 obj=82773 op='TABLE ACCESS STORAGE FULL TESTASPM (cr=144288 pr=144284 pw=0 str=1 time=35468 dr=144284 dw=0 us cost=39438 size=3900000 card=100000)'
Traditionally, some notes and the summary of this execution (remember? it represents the execution with the FTS).
Looking at the names of the WAITs, it is clear that the offload indeed took place: "cell smart table scan".
Then comes the crucial: FETCH #139840530859208:c=19743,e=40067,p=144284,cr=144288,cu=0,mis=0,r=1.
So, the time spent on CPU = 19743 us, the total elapsed time = 40067 us, the number of physical reads = 144284 (which does not seem to be taken into account though) and, last but not least, the number of consistent reads = 144288 – and this one seems to play an important role, although in the case of smart-scan and offload it’s hard to talk about LIOs.
It seems legitimate to refer to cr (LIOs) even if a query runs with a full table scan that bypasses the buffer cache (utilizing "direct path read" on non-Exadata platforms, or "direct path read" possibly followed by "smart scan" on Exadata).
Firstly, the (cr) statistic is always populated in the 10046 trace, regardless of whether a query actually bypasses the buffer cache or truly uses it (that is, when data retrieval is performed from the cache). Therefore, it serves as a common reference point.
Secondly, the architects of SPM might have assumed — this is merely my educated guess — that if a query performs no physical I/Os, the CPU time will be lower, and conversely (let’s consider the extreme case for a moment), if a query satisfies all reads from the buffer cache, then the CPU time will accordingly be higher.
Finally, since the weighted dependency between all three metrics — CPU time, elapsed time, and the number of reported logical I/Os — is apparently taken into account when determining which baseline to apply, the CPU time and the true number of LIOs are correlated proportionally.
A quick add-on, as I guess the excerpt below (i.e. the STAT section) may be underappreciated.
When you examine the STAT-related data in the SQL Trace (or an output of tkprof), you uncover a veritable goldmine of diagnostic information.
Let’s take id=2 from above as an example : STAT #139840530859208 id=2 cnt=74002 pid=1 pos=1 obj=82773 op='TABLE ACCESS STORAGE FULL TESTASPM (cr=144288 pr=144284 pw=0 str=1 time=35468 dr=144284 dw=0 us cost=39438 size=3900000 card=100000)'
– such STAT lines not only show the SQL Plan with its operation types, etc., but also form a very detailed report per SQL Plan Id, including the true cardinality of each Plan Id (see: cnt, but beware: not when offload took place! – this unfortunately also holds true for gather_plan_statistics), the time elapsed per Id, the number of PIOs, LIOs, etc. – a sort of output of a SQL hinted with gather_plan_statistics, or SQL Monitor on steroids, but licence-free 🙂
Some of these somewhat cryptic symbols (for example: dr) are demystified once you look at the tkprof version – so dr becomes direct reads, dw = direct writes. On the other hand pr stays pr in tkprof (physical reads, possibly cached).
You may even apply some logical deduction when looking at these numbers; for example pr = dr with dr <> 0 almost certainly means a smart scan on Exadata.
Even more universal: pr = 0 (physical reads = 0) and cr <> 0 (LIOs <> 0) implies that the CPU time per Plan Id covers the majority of the total ela time per Plan Id (note, CPU time is not explicitly reported per Plan Id), as all reads were satisfied from the buffer cache, hammering the CPU.
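If you prefer not to deal with raw 10046 traces at all, a licence-free way to get comparable per-plan-line statistics is the combination mentioned above: the gather_plan_statistics hint plus dbms_xplan. A minimal sketch (nothing Auto-SPM-specific here, and the same caveat about offloaded cardinalities applies):
-- note: adding the hint changes the SQL text, hence a new sql_id
select /*+ gather_plan_statistics */ max(padding)
from TSTUS.TESTASPM where ID <= 100e3;
-- 'ALLSTATS LAST' reports actual rows (A-Rows), buffers and timings per plan line
select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));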
Summary FTS :
ela: 40ms
cpu: 20ms (19743 us)
cons-reads: 144288
Execution with Index Range Scan (~100 ms)
OK, let’s get to the second execution of the same query (and same sql_id), this time though with Index Range Scan :
15:48:59 TSTUS @ MYPDB> @test4b
15:49:02 TSTUS @ MYPDB> -- alter session set cell_offload_processing = false ;
15:49:02 TSTUS @ MYPDB> -- alter session set db_file_multiblock_read_count = 1 ;
15:49:02 TSTUS @ MYPDB>
15:49:02 TSTUS @ MYPDB> alter index TSTUS.IDX_TESTASPM_ID &set_index2_vis_invisb ;
Enter value for set_index2_vis_invisb: VISIBLE
old 1: alter index TSTUS.IDX_TESTASPM_ID &set_index2_vis_invisb
new 1: alter index TSTUS.IDX_TESTASPM_ID VISIBLE
Index altered.
15:49:08 TSTUS @ MYPDB>
15:49:08 TSTUS @ MYPDB> @fb
15:49:08 TSTUS @ MYPDB> set echo on
15:49:08 TSTUS @ MYPDB> alter system flush buffer_cache ;
System altered.
15:49:08 TSTUS @ MYPDB> set feedback on sql_id
INST_ID SID SERIAL# OSPID PID TRACEFILE
------- ----------- ------- ---------- ---------- ----------------------------------------------------------------------------------------------------
1 1960 26052 271408 145 DB23AICDB_ora_271408.trc
15:49:08 TSTUS @ MYPDB> select /* TESTASPM_LARGE_RANGE */ max(padding) from TSTUS.TESTASPM where ID <= 100e3;
MAX(PADDING)
--------------------------------
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
1 row selected.
SQL_ID: fdrcx2c2ms48x
Elapsed: 00:00:00.19
In the meantime we got two SQL trace files :
10046 trace files for both execs :
# grep fdrcx2c2ms48x *.trc
1.
DB23AICDB_ora_258650.trc:PARSING IN CURSOR #139840530859208 len=84 dep=0 uid=134 oct=3 lid=134 tim=1694162496203 hv=87822621 ad='c69694f0' sqlid='fdrcx2c2ms48x'
2.
DB23AICDB_ora_271408.trc: value="<notes><fg_spm><sql_id>fdrcx2c2ms48x</sql_id><test_phv>522503748</test_phv><ref_phv>2920173004</ref_phv><test_ofe>23.1.0</test_ofe><ref_ofe>23.1.0</ref_ofe><ver>normal</ver><rfp>1</rfp><rep>1</rep><osfe>1</osfe><oln>0</oln><hfe>0</hfe><tosfe>3</tosfe><tol"...
DB23AICDB_ora_271408.trc:PARSING IN CURSOR #140714085758152 len=84 dep=0 uid=134 oct=3 lid=134 tim=1694307518484 hv=87822621 ad='c69694f0' sqlid='fdrcx2c2ms48x'
DB23AICDB_ora_271408.trc: value="<notes><fg_spm><sql_id>fdrcx2c2ms48x</sql_id><test_phv>522503748</test_phv><ref_phv>2920173004</ref_phv><test_ofe>23.1.0</test_ofe><ref_ofe>23.1.0</ref_ofe><ver>normal</ver><rfp>1</rfp><rep>1</rep><osfe>1</osfe><oln>0</oln><hfe>0</hfe><tosfe>3</tosfe><tol"...
As above, let’s be fair and examine the raw trace representing the Index Range Scan (just sections meaningful to us) :
2077 PARSING IN CURSOR #140714085758152 len=84 dep=0 uid=134 oct=3 lid=134 tim=1694307518484 hv=87822621 ad='c69694f0' sqlid='fdrcx2c2ms48x'
2078 select /* TESTASPM_LARGE_RANGE */ max(padding) from TSTUS.TESTASPM where ID <= 100e3
2079 END OF STMT
2080 EXEC #140714085758152:c=307,e=68,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=3639302403,tim=1694307518481
2081 WAIT #140714085758152: nam='cell single block physical read: RDMA' ela=57 cellhash#=2957023425 diskhash#=0 bytes=8192 obj#=82787 tim=1694307518665
2082 WAIT #140714085758152: nam='cell single block physical read: RDMA' ela=31 cellhash#=2743020163 diskhash#=0 bytes=8192 obj#=82787 tim=1694307518791
2083 WAIT #140714085758152: nam='cell single block physical read: RDMA' ela=28 cellhash#=2957023425 diskhash#=0 bytes=8192 obj#=82787 tim=1694307518875
2084 WAIT #140714085758152: nam='cell single block physical read: RDMA' ela=29 cellhash#=976102048 diskhash#=0 bytes=8192 obj#=82773 tim=1694307518945
..
..
3011 WAIT #140714085758152: nam='cell single block physical read: RDMA' ela=24 cellhash#=2743020163 diskhash#=0 bytes=8192 obj#=82773 tim=1694307618913
3012 WAIT #140714085758152: nam='cell single block physical read: RDMA' ela=40 cellhash#=2743020163 diskhash#=0 bytes=8192 obj#=82773 tim=1694307619030
3013 WAIT #140714085758152: nam='cell single block physical read: RDMA' ela=25 cellhash#=2743020163 diskhash#=0 bytes=8192 obj#=82773 tim=1694307619138
3014 WAIT #140714085758152: nam='cell single block physical read: RDMA' ela=22 cellhash#=2957023425 diskhash#=0 bytes=8192 obj#=82787 tim=1694307619193
3015 WAIT #140714085758152: nam='cell single block physical read: RDMA' ela=23 cellhash#=2743020163 diskhash#=0 bytes=8192 obj#=82773 tim=1694307619290
3016 FETCH #140714085758152:c=99944,e=100774,p=934,cr=1026,cu=0,mis=0,r=1,dep=0,og=1,plh=3639302403,tim=1694307619323
Summary Index Range Scan :
ela: 100ms
cpu: 100ms (99944 us)
cons-reads: 1026
FETCH:c=99944,e=100774,p=934,cr=1026,cu=0,mis=0,r=1,dep=0,og=1,plh=3639302403
Summary Index Range Scan : | Summary FTS : |
ela: 100ms | ela: 40ms |
cpu: 100ms (99944 us) | cpu: 20ms (19743 us) |
cons-reads: 1026 | cons-reads: 144288 |
Before we jump to the dump of the xml tree from NOTES and try to match the above numbers (see the previous post), let's take a look at the status after auto-optimization:
TASK_NAME INTERVAL STATUS LAST_SCHEDULE_TIME CURRENT_TIME
---------------------------------------------------------------- -------- -------------------- ------------------- -------------------
Auto STS Capture Task 120 SUCCEEDED 22-08-2025 15:49:06 22-08-2025 15:50:14
PLAN_TIMESTAMP SQL_ID SQL_TEXT PARSING_SCHEMA_NAME ELAPSED_TIME CPU_TIME BUFFER_GETS DISK_READS EXECS PARSING_US FIRST_LOAD_TIME SQL_PLAN_BASELINE OPTIMIZER_COST
------------------- --------------- -------------------------------------------------- ------------------------------ ------------ ------------ ----------- -------------------- ------------ ---------- ------------------- ------------------------------ --------------
22-08-2025 15:46:43 fdrcx2c2ms48x select /* TESTASPM_LARGE_RANGE */ max(padding) fro TSTUS 83,788 57,471 144,498 144,332 1 TSTUS 2025-08-22/15:43:42 39438
m TSTUS.TESTASPM where ID <= 100e3
SIGNATURE SQL_TEXT PLAN_NAME CREATED ENABLED ACC REP OPTIMIZER_COST ELAPSED_TIME CPU_TIME BUFFER_GETS FOREGROUND_LAST_VER
------------------------- -------------------------------------------------- ------------------------------ ------------------- -------------------- --- --- -------------- ------------ ------------ ----------- -------------------
6472210033812658637 select /* TESTASPM_LARGE_RANGE */ max(padding) fro SQL_PLAN_5mng8rh4jyqfd1f24c644 22-08-2025 15:49:08 YES YES YES 1060 100,888 100,292 1,026 22-08-2025 15:49:08
m TSTUS.TESTASPM where ID <= 100e3
First thing to spot?
The SQL was indeed auto-optimized (both executions were evaluated and the final baseline was selected and auto-accepted).
To my surprise, even though the FTS ran faster (let's admit though: both were really fast, in the range of milliseconds), the Index Range Scan version of the SQL Plan was chosen.
The only (huge) difference in favor of Index Scan is the reported number of logical reads (cr = consistent-reads from the 10046 trace).
Let us reach out again to the sqlobj$auxdata and its NOTES column with the xml tree storing the above data (again, if in doubt what I am referring to, visit the previous post).
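A minimal sketch of pulling that xml out (the SIGNATURE value comes from the baseline report above; whether you need further predicates may depend on your version):
select notes
from   sys.sqlobj$auxdata
where  signature = 6472210033812658637
and    notes is not null;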
For immediate reference, both FETCHes from the respective SQL Traces are placed right underneath the corresponding xml branches:
# xmllint --format notes_mess.xml -o notes_fine.xml
# cat notes_mess.xml
<notes><fg_spm><sql_id>fdrcx2c2ms48x</sql_id><test_phv>522503748</test_phv><ref_phv>2920173004</ref_phv><test_ofe>23.1.0</test_ofe><ref_ofe>23.1.0</ref_ofe><ver>normal</ver><rfp>1</rfp><rep>1</rep><osfe>1</osfe><oln>0</oln><hfe>0</hfe><tosfe>3</tosfe><toln>0</toln><thfe>0</thfe><comp_time>2025-08-22/15:49:08</comp_time><ref_hp><cpu>37</cpu><bg>210</bg></ref_hp><test_hp><cpu>100</cpu><bg>407</bg></test_hp><ref_perf><bg>144288</bg><cpu>20</cpu><elps>83</elps></ref_perf><test_perf><bg>1026</bg><cpu>100</cpu><elps>100</elps></test_perf><status>better</status><ver_time>2025-08-22/15:49:08</ver_time></fg_spm></notes>
# cat notes_fine.xml
<?xml version="1.0"?>
<notes>
<fg_spm>
<sql_id>fdrcx2c2ms48x</sql_id>
<test_phv>522503748</test_phv>
<ref_phv>2920173004</ref_phv>
<test_ofe>23.1.0</test_ofe>
<ref_ofe>23.1.0</ref_ofe>
<ver>normal</ver>
<rfp>1</rfp>
<rep>1</rep>
<osfe>1</osfe>
<oln>0</oln>
<hfe>0</hfe>
<tosfe>3</tosfe>
<toln>0</toln>
<thfe>0</thfe>
<comp_time>2025-08-22/15:49:08</comp_time>
<ref_hp>
<cpu>37</cpu>
<bg>210</bg>
</ref_hp>
<test_hp>
<cpu>100</cpu>
<bg>407</bg>
</test_hp>
<ref_perf>
<bg>144288</bg>
<cpu>20</cpu>
<elps>83</elps>
</ref_perf>
FETCH (FTS) #139840530859208:c=19743,e=40067,p=144284,cr=144288,cu=0,mis=0,r=1,dep=0,og=1,plh=2834813234
<test_perf>
<bg>1026</bg>
<cpu>100</cpu>
<elps>100</elps>
</test_perf>
FETCH (Index Range Scan) #140714085758152:c=99944,e=100774,p=934,cr=1026,cu=0,mis=0,r=1,dep=0,og=1,plh=3639302403
<status>better</status>
<ver_time>2025-08-22/15:49:08</ver_time>
</fg_spm>
</notes>
Reminder – "test_perf" surely stands for the tested performance of the Index Range Scan SQL Plan, evaluated against "ref_perf", which is certainly the reference performance of the SQL Plan with FTS.
Indeed, the Index Range Scan was chosen (status: better), although it was slower (yet both were fast enough). What seems crucial: it involved far fewer resources.
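To put the trade-off in numbers: the index plan reported roughly 141 times fewer consistent reads (144288 / 1026 ≈ 141), at the price of about 2.5 times the elapsed time (100 ms vs 40 ms) and roughly 5 times the CPU time (100 ms vs 20 ms).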
What I couldn't quite figure out: this time the elps (total elapsed) stored in the xml branch for the FTS was nearly double the value from the 10046 trace. The rest of the data matches very closely, with the LIOs (cr) being an exact match.
Next, let us apply a much broader range: 10% (predicate: ID <= 2e6).
For the conclusions and summary, head over to part 3.