Benchmarking Oracle DML: A Case Study II – Effects of Indexes

This is the second part of a two-part article. The first part, Benchmarking Oracle DML: A Case Study I – Update vs Merge, An Example, compares an Update and a Merge statement for performance in updating a table involving a subquery, in the absence of indexes. The first part describes the problem and the mechanism for generating parameterised test data.

In this second part, we are interested in the effects on performance of indexes for DML statements that affect a large proportion of the table. To that end, we take as data sets the 1-dimensional ‘shallow slice’ of data set points from part 1 where the updates apply to about half of the total records. We’ll run the statements in the presence of: (i) No indexes; (ii) product id index only; (iii) product id and sales date indexes. Note that the only updated column is sales date.

The idea behind the analysis is of course that when performing a large batch DML we may be able to drop the indexes first, then recreate them after the DML, depending on our environment. Obviously, if we save time on the DML this will be offset to some extent by the need to recreate the indexes. Therefore we will also time the index creations, and for good measure we’ll include a timing of the well-known CTAS approach for bulk updates, where a new table is created by selecting from the table to be updated, and then the old table dropped and the new one renamed.
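In outline, once the new table has been created (the CTAS statement itself is listed below, under CRE_DDL), the swap might be completed as follows. This is a sketch only: any constraints and grants would also need re-creating on the new table.

DROP TABLE product_sales;

ALTER TABLE product_sales_ctas RENAME TO product_sales;

-- re-create the indexes as required, e.g.
CREATE INDEX ps_prd_n1 ON product_sales (product_id);
CREATE INDEX ps_date_n1 ON product_sales (sales_date);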

Tom Kyte discusses issues around this kind of bulk update in a 2014 Oracle Magazine article (referenced also in part 1 of this current article) On Table Updates and SQL Plan Baselines. He notes, in particular, that the CTAS approach benefits from avoiding undo creation.

Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production

DML and DDL Statements

In this part 2 we have two groups of statements to test. The first group is DML: the update from part 1, plus an insert and a delete statement. The group also includes the three versions of the merge from part 1, but as those were always slower than the update, we exclude them from the article.

The second group has the two create index statements and the Create Table As Select. We can add timings from this group to the DML timings to compare DML in the presence of one or both indexes, with doing it without indexes, then recreating afterwards. We can also compare the update approaches with the CTAS approach with index creation added in to its timings.

DML Statements (Group DMLSALES)

A condition ‘WHERE 1=1’ is added to all the statements except the index creations. This is a placeholder: the framework package replaces the ‘1’s with the formatted timestamp mentioned in part 1, forcing a hard-parse on each execution.

Update (UPD_DML)

This statement sets the sales date to the hard-coded value, 1 January 1900, on the records holding the minimum sales date for their product (excluding any already at that value).

UPDATE product_sales sd
   SET sd.sales_date = DATE '1900-01-01'
 WHERE 1=1 AND sd.sales_date = ( 
   SELECT Min(sd2.sales_date)
     FROM product_sales sd2
    WHERE sd.product_id = sd2.product_id   
 )
   AND sd.sales_date != DATE '1900-01-01'

Insert (INS_DML)

This statement inserts, for each product whose minimum sales date differs from the hard-coded date, a new record carrying that hard-coded date.

INSERT INTO product_sales
WITH date_mins AS (
    SELECT product_id
      FROM product_sales
     GROUP BY product_id
     HAVING Min(sales_date) != DATE '1900-01-01'
)
SELECT product_id, DATE '1900-01-01'
  FROM date_mins

Delete (DEL_DML)

This statement deletes the records that the update statement updates.

DELETE product_sales sd
  WHERE 1=1 AND (product_id, sales_date) IN (
    SELECT product_id, Min(sales_date)
      FROM product_sales
     WHERE 1=1
     GROUP BY product_id
    HAVING Min(sales_date) != DATE '1900-01-01'
    )

DDL Statements (Group DDLSALES)

In this group we need a post_query_sql step to drop the created objects so that the execution at the next data point will work.

Create product_id index (PRD_DDL)

pre_query_sql

CREATE INDEX ps_prd_n1 ON product_sales (product_id)

post_query_sql

DROP INDEX ps_prd_n1

Create sales_date index (SDT_DDL)

pre_query_sql

CREATE INDEX ps_date_n1 ON product_sales (sales_date)

post_query_sql

DROP INDEX ps_date_n1

Create table as select (CRE_DDL)

pre_query_sql

CREATE TABLE product_sales_ctas AS 
SELECT product_id,
       CASE WHEN sales_date = Min(sales_date) OVER (PARTITION BY product_id)
            THEN DATE '1900-01-01' ELSE sales_date END sales_date
  FROM product_sales

post_query_sql

DROP TABLE product_sales_ctas

Data Sets

The same data generator code was used as in part 1, but this time we are interested in DML where a large proportion of the records are affected, so we take the ‘shallow’ data set only, where D=2 and W is in (1, 4, 7, 10), i.e. from 100K to 1M products. These lead to sizes of (200K, 800K, 1.4M, 2M) records, of which about half are updated, deleted, or copied by the insert.
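As a quick sanity check on these proportions, a query along the following lines (illustrative only, not part of the framework) counts the records holding their product’s minimum date:

SELECT Count(*) n_total,
       Count(CASE WHEN sales_date = min_date THEN 1 END) n_affected
  FROM (SELECT sales_date,
               Min(sales_date) OVER (PARTITION BY product_id) min_date
          FROM product_sales);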

For the DML statement group a batch is run for the given data set, with indexes present as follows:

0: No indexes
1: ps_prd_n1 (product id)
2: ps_prd_n1 (product id), ps_date_n1 (sales date)

For the DDL statement group a batch is run for the given data set, with post statement DDL dropping the created object.

Results

The detailed results can be seen in the embedded file below, including those for the merge versions that are omitted from the diagrams later.

Graphs

Although the DML statements were run against four data points, with results as in the embedded file above, we show graphs only at the wide point W=10, having 1M products with two records each. The graphs take the number of indexes as the x-axis. Scrollboxes are used to show elapsed time graphs at the top, while CPU and %CPU can be seen by scrolling down.

DML Times by Indexes

Elapsed Times: DML

CPU Times: DML

%CPU/Elapsed Times: DML

  • The 1-index case has the index on product id, which is not an updated column, and so the time for the update is about the same as with no indexes
  • The insert is much faster than both delete and update in all cases

DML Times Due to Indexes

Here we subtract the times in the 0-index case from the others to get estimates for the contributions to total times attributable to index processing.

%Elapsed Times due to Indexes: DML

%CPU Times due to Indexes: DML

  • The insert shows the greatest percentages due to indexes, having relatively small time when there are no indexes
  • As noted above, an index on a non-updated column has no effect on update time, but does affect insert and delete

Combined Update and Index Creation Times

Here we add the index creation times to the pure DML times, to compare the total time for a direct update against dropping the indexes first, then re-creating them after the update. We also include the CTAS method.

Elapsed Total Times: Update

CPU Total Times: Update

%CPU/Elapsed Times: Update

  • For the 2-index case, where one of the indexes is on the updated column, the elapsed time is two and a half times as great for the direct update compared with dropping and re-creating the indexes
  • You could save a bit of time by leaving the non-updated-column index in place as that has no impact on the update (although I did not do this here)
  • The CTAS method was much faster than the others
  • If you scroll down you will see that the %CPU time is very high for CTAS, close to 100%, whereas for the other methods it’s less than a third. This is no doubt related to the absence of undo processing noted by Tom Kyte in the article linked earlier:

And remember, they never create UNDO for the CREATE TABLE or CREATE INDEX commands

Combined Insert/Delete and Index Creation Times

Here we add the index creation times to the pure DML times, to compare the total times for direct insert and delete against dropping the indexes first, then re-creating them after the DML.

Elapsed Total Times: Insert/Delete

CPU Total Times: Insert/Delete

%CPU/Elapsed Times: Insert/Delete

  • For the 2-index case, the elapsed times for direct DML are about four times (insert) and two and a half times (delete) as great as for dropping and re-creating the indexes
  • In fact, the times taken in creating the indexes are quite small compared to the DML, so that the times increase much more slowly with number of indexes for the drop/re-create methods
  • The %CPU time is significantly higher for the drop and re-create indexes methods

Conclusions

In part 2 of this article we compared timings for the DML statements on the example problem with and without indexes where a large proportion of records are affected. Findings include:

  • Dropping the indexes before doing the DML, then adding them back again usually gives much better performance than applying the DML directly, depending on the type of DML and the columns in the index
  • The CTAS method for updates is even faster, and can also be applied for the inserts and deletes, although we didn’t include this here
  • Graphs show that CTAS has very high %CPU, reflecting the absence of undo processing mentioned in the linked article from Oracle Magazine

The example problem, together with all code used in both parts of this article, and the revisions made to the framework are available here: A Framework for Dimensional Benchmarking of SQL Performance. The framework, as presented at the 2017 Ireland Oracle User Group conference, Dimensional Performance Benchmarking of SQL – IOUG Presentation, has had significant upgrades made to allow benchmarking of both DML and DDL (previously it allowed for DML as a pre-query step only, for example to materialise a subquery with indexes).

Part 1 of this article is here: Benchmarking Oracle DML: A Case Study I – Update vs Merge, An Example.






Benchmarking Oracle DML: A Case Study I – Update vs Merge, An Example

Some time ago I was involved in performing a one-off update of a column in an Oracle table of 250 million records, of which about 50 million would be updated. In the initial attempt, in development, the update ran for a very long time before aborting with the error:

ORA-30036: unable to extend segment by 8 in undo tablespace 'UNDOTBS'

I noted that the updated column featured in two indexes, and realised that the update would likely entail much more work in updating the indexes than in the update of the table. I reasoned that, because the index data are physically stored in a way that depends on the values, changing the values could involve a lot of physical restructuring on disk. Updating the values in the table, on the other hand, probably would not involve much restructuring of the table data, if the storage requirements of the new values were similar to those of the old ones, which they were. Anyway, we changed the process to have it drop the indexes, do the update, then recreate the indexes. There are other, possibly even faster, ways of doing this (as we’ll see), but the change allowed the whole process to complete in around an hour.

Some time later I noticed an OTN thread, Improve query performance instead of aggregrate function, in which the poster requested help in improving the performance of an Oracle update statement (the title is a little misleading). Recalling my earlier experience, I suggested that dropping any indexes that included the updated column would improve performance. As it turned out, the poster stated that the table did not have any indexes, and other posters suggested various alternative ways of doing the update.

In the example there is a product sales table having product id and sales date columns (and a few others unspecified), and the update sets the sales date to a constant value for the earliest sales date for each product. The data model and SQL in the thread are relatively simple, and it occurred to me that it would be interesting to use the example to do a case study of the performance impact of indexes on updates and other DML statements.

In this two-part article I’ll use parameterised datasets to do two sets of comparisons: First, we’ll compare the performance of the original poster’s update statement with a slightly modified version of another poster’s solution using ‘merge’, across a 2-dimensional grid of dataset points with no indexes. Second (in part 2), we’ll compare the performance of both forms of update, plus related delete and insert statements on the 1-dimensional ‘slice’ of dataset points where the updates apply to about half of the total records. In the second set, we’ll run the statements in the presence of: (i) No indexes; (ii) product id index only; (iii) product id and sales date indexes, and we’ll also compare with a Create Table As Select (CTAS) approach.

To do the comparisons, I use my own Oracle benchmarking framework, which I presented at the 2017 Ireland Oracle User Group conference, Dimensional Performance Benchmarking of SQL – IOUG Presentation. The framework, which has been upgraded during this work to cover DML and DDL more fully, including all code for this article, is available on GitHub: A Framework for Dimensional Benchmarking of SQL Performance.

Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production

Test Data

Data Structure

CREATE TABLE product_sales (product_id NUMBER, sales_date DATE)
/
CREATE INDEX ps_date_n1 ON product_sales (sales_date)
/
CREATE INDEX ps_prd_n1 ON product_sales (product_id)
/

Data Generator

The data generation procedure takes two parameters, a width parameter being the number of products, and a depth parameter being the number of records per product. Rows are generated using two cross-joined subqueries that each generate rows via the common ‘select from dual connect by’ method, as follows:

INSERT INTO product_sales
WITH prod_gen AS (
  SELECT LEVEL + (i - 1)*c_max_rowgen product_id
    FROM DUAL
    CONNECT BY LEVEL <= l_wide_batch_sizes(i)
), day_gen AS (
  SELECT LEVEL rn
    FROM DUAL
    CONNECT BY LEVEL <= p_point_deep
)
SELECT p.product_id, c_start_date + Mod (Abs (DBMS_Random.Random), c_n_days_in_century)
  FROM prod_gen p
 CROSS JOIN day_gen d;

Note that:

  • Product ids are sequential
  • Dates are randomized across the century from 1 January 1900
  • A call is made to DBMS_Random.Seed at the start of the procedure to ensure each call with the same parameters will get the same (pseudo-)random dates
  • The insert occurs within a loop with index i in order to limit the number of rows generated at once (see below for reason)

The reason for limiting the number of rows generated by inserting within a loop is that the Oracle tree-walk mechanism uses memory increasing with number of levels traversed, and I hit the dreaded

Completed with error: ORA-30009: Not enough memory for CONNECT BY operation

There are various ways of avoiding this, including the one described in Generating lots of rows using connect by – safely!, which suggests cross-joining as many tree-walk subqueries as are necessary to generate the overall number from tree-walks of smaller size. In our situation, however, that approach is problematic because we pass in the desired number as a parameter: to generate exactly that number we would have to construct the statement dynamically, with a set of subqueries whose limits are products of the prime factors of the number. This is impractical, and in any case the largest prime factor could itself be too large. For this reason the inserts are performed in batches, within a loop over an array containing the batch sizes.
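To make the mechanism concrete, here is a minimal sketch of the batching loop around the insert shown above. The constant values, the array-filling logic and the seed value are illustrative assumptions; only the embedded insert is as in the actual generator.

DECLARE
  c_max_rowgen        CONSTANT PLS_INTEGER := 100000;  -- assumed cap on rows per tree-walk
  c_start_date        CONSTANT DATE := DATE '1900-01-01';
  c_n_days_in_century CONSTANT PLS_INTEGER := 36524;   -- days in the century from 1900-01-01
  p_point_wide        PLS_INTEGER := 1000000;          -- example width: 1M products
  p_point_deep        PLS_INTEGER := 2;                -- example depth: 2 records per product
  TYPE batch_list IS TABLE OF PLS_INTEGER;
  l_wide_batch_sizes  batch_list := batch_list();
  l_remaining         PLS_INTEGER := p_point_wide;
BEGIN
  -- split the total number of products into batches of at most c_max_rowgen
  WHILE l_remaining > 0 LOOP
    l_wide_batch_sizes.EXTEND;
    l_wide_batch_sizes(l_wide_batch_sizes.COUNT) := Least(l_remaining, c_max_rowgen);
    l_remaining := l_remaining - l_wide_batch_sizes(l_wide_batch_sizes.COUNT);
  END LOOP;

  DBMS_Random.Seed(0);  -- fixed seed so repeated runs generate the same dates (assumed value)

  FOR i IN 1..l_wide_batch_sizes.COUNT LOOP
    INSERT INTO product_sales
    WITH prod_gen AS (
      SELECT LEVEL + (i - 1)*c_max_rowgen product_id
        FROM DUAL
        CONNECT BY LEVEL <= l_wide_batch_sizes(i)
    ), day_gen AS (
      SELECT LEVEL rn
        FROM DUAL
        CONNECT BY LEVEL <= p_point_deep
    )
    SELECT p.product_id, c_start_date + Mod (Abs (DBMS_Random.Random), c_n_days_in_century)
      FROM prod_gen p
     CROSS JOIN day_gen d;
  END LOOP;
  COMMIT;
END;
/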

Lower depth values correspond to larger proportions of records to be updated, with smaller numbers of values to be sorted within the product id partitions: since only the records holding the minimum of the D random dates for a product are updated, the updated fraction is roughly 1/D, barring ties. For example, at depth 2 around half the records are updated, while at depth 100 around 1% are updated.

Test Case 1: Update vs Merge, no Indexes

Both update and merge statements below are based on statements in the thread mentioned above. I reformatted them and altered the merge to make it consistent with the update in updating all records of minimum date where duplication occurs.

One other change may be worth highlighting: As Steven Feuerstein noted recently, About the Date Literal in Oracle Database, the date literal seems to be under-used by Oracle developers, but it is neater than using To_Date with an explicit format mask. I replaced

TO_DATE('01.01.2017', 'dd.mm.yyyy')

with the literal equivalent

DATE '2017-01-01'

I incidentally also changed the year for my test data.

Update/Merge Statements

Update (UPD_DML)

UPDATE product_sales sd
   SET sd.sales_date = DATE '1900-01-01'
 WHERE 1=1 AND sd.sales_date = ( 
   SELECT Min(sd2.sales_date)
     FROM product_sales sd2
    WHERE sd.product_id = sd2.product_id   
 )
   AND sd.sales_date != DATE '1900-01-01'

This is essentially the same statement as in the original post by user12251389.

Merge (MRG_DML)

MERGE INTO product_sales tgt
USING (SELECT *
       FROM (
         SELECT rowid arowid, product_id, DATE '1900-01-01' sales_date,
                sales_date AS old_sales_date,
                Rank() OVER (PARTITION BY product_id ORDER BY sales_date) rn
         FROM   product_sales    
       )
       WHERE rn = 1 AND 0 = Decode(sales_date, old_sales_date, 1, 0)) src
   ON (tgt.rowid = src.arowid)
 WHEN MATCHED THEN UPDATE
  SET tgt.sales_date = src.sales_date

This is essentially the same statement as in the post by responder Paulzip, except that where he had Row_Number I have put Rank to allow for duplicate updating in order to be consistent with the update statement.
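To make the difference concrete, the thread’s version of the analytic expression would read:

Row_Number() OVER (PARTITION BY product_id ORDER BY sales_date) rn

Row_Number assigns rn = 1 to a single, arbitrarily chosen record among any tied at the minimum date, whereas Rank flags all of them, matching the behaviour of the update.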

For my performance testing I added two hinted versions of the above merge, the first of which has the following hint:

no_swap_join_inputs(@"SEL$F5BB74E1" "TGT"@"SEL$1")

while the second has two hints. These rather strange-looking hints will be explained below in relation to execution plans.

Results

The four SQL statements were run across a 2-dimensional grid of width and depth data points. After each update the number of records affected is saved against data point and SQL statement, and the transaction is rolled back. The elapsed times, CPU times, and CPU percentages are displayed for wide and deep slices of the grid in the two scrollboxes below. Of course, data creation and rollback times are not included, although the instrumentation reports them separately in the log file.

The detailed results can be seen in the embedded file below.

Wide Slice Graphs

In the wide slice there are 1 million products and from 2 to 100 dates per product, with a maximum of 100 million records. At D=2 about half the records are updated, and at D=100 about 1% are updated.

Elapsed Times: Wide Slice

CPU Times: Wide Slice

CPU Percentages: Wide Slice

Deep Slice Graphs

In the deep slice there are 100 dates per product and from 100,000 to 1,000,000 products, with a maximum of 100 million records. About 1% of the records are updated.

Elapsed Times: Deep Slice

CPU Times: Deep Slice

CPU Percentages: Deep Slice

The results show:

  • The update SQL (UPD_DML) is faster at all data points than the merges, being generally twice as fast or better at the deep data points
  • At the shallow data points (D=2), the timings are much closer, reflecting in part the fact that proportionally more time goes to doing the actual updating
  • The two hinted versions of the merge are significantly faster than the unhinted version (MRG_DML), and we’ll discuss this in relation to execution plans below

It is interesting to note that the update statement from the original poster in the OTN thread is faster than (the small variation on) the more complex merge statement proposed in the thread to improve performance! I considered whether my substitution of Rank for Row_Number might have been to blame, and found that it did have a significant effect on the execution plan, where it caused a hash join to be used in place of a nested loops join. In fact, the hinted version MRG_HT2 has the same plan as would the Row_Number version, and is faster than the unhinted merge, but still slower than the update.

Execution Plans

The benchmarking framework ensures that the SQL engine hard-parses, and thus re-calculates the optimal execution plan, at each execution, by inserting a functionally meaningless condition x=x into the statement, where x is the number given by the current timestamp to the millisecond, formatted thus: To_Char(SYSTIMESTAMP, 'yymmddhh24missff3'). This results in the SQL id, which is the hash of the SQL text, being different each time.
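By way of illustration, the substitution amounts to something like the following sketch (the framework’s actual variable and procedure names will differ):

DECLARE
  l_ts  VARCHAR2(15) := To_Char(SYSTIMESTAMP, 'yymmddhh24missff3');
  l_sql VARCHAR2(4000) := 'UPDATE product_sales sd SET ... WHERE 1=1 AND ...';  -- statement text (abbreviated)
BEGIN
  -- the condition stays trivially true, but the SQL text, and hence its hash, changes
  l_sql := Replace(l_sql, '1=1', l_ts || '=' || l_ts);
END;
/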

The execution plan for each SQL statement execution is printed to log, and was the same for each data point. The plans are listed in the scrollbox below at the highest data point.

Execution Plan for Update (UPD_DML)

--------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name          | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT       |               |      1 |        |      0 |00:01:04.49 |    1516K|   5474 |       |       |          |
|   1 |  UPDATE                | PRODUCT_SALES |      1 |        |      0 |00:01:04.49 |    1516K|   5474 |       |       |          |
|*  2 |   HASH JOIN            |               |      1 |   1001K|    998K|00:00:56.15 |     496K|   5474 |    53M|  8523K|   52M (0)|
|   3 |    VIEW                | VW_SQ_1       |      1 |   1001K|    997K|00:00:37.33 |     248K|      0 |       |       |          |
|*  4 |     FILTER             |               |      1 |        |    997K|00:00:37.15 |     248K|      0 |       |       |          |
|   5 |      SORT GROUP BY     |               |      1 |   1001K|   1000K|00:00:37.09 |     248K|      0 |    70M|    15M|   62M (0)|
|   6 |       TABLE ACCESS FULL| PRODUCT_SALES |      1 |    100M|    100M|00:00:01.62 |     248K|      0 |       |       |          |
|*  7 |    TABLE ACCESS FULL   | PRODUCT_SALES |      1 |     99M|     99M|00:00:12.53 |     248K|   5474 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("SD"."SALES_DATE"="MIN(SD2.SALES_DATE)" AND "SD"."PRODUCT_ID"="ITEM_1")
   4 - filter(MIN("SD2"."SALES_DATE")<>TO_DATE(' 1900-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
   7 - filter("SD"."SALES_DATE"<>TO_DATE(' 1900-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

Execution Plan for Merge (MRG_DML)

--------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name          | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|
--------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | MERGE STATEMENT             |               |      1 |        |      0 |00:02:30.40 |    1516K|    428K|    428K|       |       |          |         |
|   1 |  MERGE                      | PRODUCT_SALES |      1 |        |      0 |00:02:30.40 |    1516K|    428K|    428K|       |       |          |         |
|   2 |   VIEW                      |               |      1 |        |    998K|00:02:02.03 |     496K|    428K|    428K|       |       |          |         |
|*  3 |    HASH JOIN                |               |      1 |    100M|    998K|00:02:01.77 |     496K|    428K|    428K|  2047M|    52M|   55M (1)|    3329K|
|   4 |     TABLE ACCESS FULL       | PRODUCT_SALES |      1 |    100M|    100M|00:00:09.96 |     248K|      0 |      0 |       |       |          |         |
|*  5 |     VIEW                    |               |      1 |    100M|    998K|00:01:33.29 |     248K|  15903 |  15903 |       |       |          |         |
|*  6 |      WINDOW SORT PUSHED RANK|               |      1 |    100M|   1001K|00:01:33.33 |     248K|  15903 |  15903 |    70M|  2904K|   97M (1)|         |
|   7 |       TABLE ACCESS FULL     | PRODUCT_SALES |      1 |    100M|    100M|00:00:11.63 |     248K|      0 |      0 |       |       |          |         |
--------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("TGT".ROWID="from$_subquery$_007"."AROWID")
   5 - filter(("RN"=1 AND DECODE(INTERNAL_FUNCTION("SALES_DATE"),"OLD_SALES_DATE",1,0)=0))
   6 - filter(RANK() OVER ( PARTITION BY "PRODUCT_ID" ORDER BY "SALES_DATE")<=1)

Execution Plan for Merge with Join Inputs Hint(MHT_DML)

----------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name          | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | MERGE STATEMENT             |               |      1 |        |      0 |00:02:02.01 |    1516K|  19175 |  15903 |       |       |          |
|   1 |  MERGE                      | PRODUCT_SALES |      1 |        |      0 |00:02:02.01 |    1516K|  19175 |  15903 |       |       |          |
|   2 |   VIEW                      |               |      1 |        |    998K|00:01:51.06 |     496K|  19175 |  15903 |       |       |          |
|*  3 |    HASH JOIN                |               |      1 |    100M|    998K|00:01:50.84 |     496K|  19175 |  15903 |    77M|  5796K|  107M (0)|
|*  4 |     VIEW                    |               |      1 |    100M|    998K|00:01:32.03 |     248K|  15903 |  15903 |       |       |          |
|*  5 |      WINDOW SORT PUSHED RANK|               |      1 |    100M|   1001K|00:01:32.12 |     248K|  15903 |  15903 |    70M|  2904K|   97M (1)|
|   6 |       TABLE ACCESS FULL     | PRODUCT_SALES |      1 |    100M|    100M|00:00:10.95 |     248K|      0 |      0 |       |       |          |
|   7 |     TABLE ACCESS FULL       | PRODUCT_SALES |      1 |    100M|    100M|00:00:10.90 |     248K|   3272 |      0 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("TGT".ROWID="from$_subquery$_007"."AROWID")
   4 - filter(("RN"=1 AND DECODE(INTERNAL_FUNCTION("SALES_DATE"),"OLD_SALES_DATE",1,0)=0))
   5 - filter(RANK() OVER ( PARTITION BY "PRODUCT_ID" ORDER BY "SALES_DATE")<=1)

Execution Plan for Merge with Nested Loops Hint(MH2_DML)

------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name          | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | MERGE STATEMENT               |               |      1 |        |      0 |00:02:01.99 |    1516K|  27343 |  15903 |       |       |          |
|   1 |  MERGE                        | PRODUCT_SALES |      1 |        |      0 |00:02:01.99 |    1516K|  27343 |  15903 |       |       |          |
|   2 |   VIEW                        |               |      1 |        |    998K|00:01:34.39 |     496K|  27343 |  15903 |       |       |          |
|   3 |    NESTED LOOPS               |               |      1 |    100M|    998K|00:01:34.16 |     496K|  27343 |  15903 |       |       |          |
|*  4 |     VIEW                      |               |      1 |    100M|    998K|00:01:30.08 |     248K|  15903 |  15903 |       |       |          |
|*  5 |      WINDOW SORT PUSHED RANK  |               |      1 |    100M|   1001K|00:01:30.03 |     248K|  15903 |  15903 |    70M|  2904K|   97M (1)|
|   6 |       TABLE ACCESS FULL       | PRODUCT_SALES |      1 |    100M|    100M|00:00:11.43 |     248K|      0 |      0 |       |       |          |
|   7 |     TABLE ACCESS BY USER ROWID| PRODUCT_SALES |    998K|      1 |    998K|00:00:04.03 |     247K|  11440 |      0 |       |       |          |
------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - filter(("RN"=1 AND DECODE(INTERNAL_FUNCTION("SALES_DATE"),"OLD_SALES_DATE",1,0)=0))
   5 - filter(RANK() OVER ( PARTITION BY "PRODUCT_ID" ORDER BY "SALES_DATE")<=1)

The choices made by Oracle’s Cost Based Optimiser (CBO) in construction of the execution plan are crucially dependent on the cardinality estimates it makes at each step. The outputs above display these estimates along with the actual numbers of rows returned. We know that the total number of records is 100M, and that approximately 1M of these are updated at the extreme wide/deep data point shown. How accurate are the cardinality estimates in the plans above? Let’s take them in turn.

UPD_DML
Here the update subquery results in a hash join between the table and an internal view in which the records from a separate scan of the table are sorted and filtered to produce the records with minimum date value by product. The estimated cardinality of the view in step #3 is 1001K, which is close to the actual cardinality of 997K.

The view is used as the build table with the table itself in step #7 being used as the probe table. This looks like the right strategy because the smaller rowset is generally preferred as the build table, used to build the hash table for the join.

MRG_DML
The merge statement also has at its heart a hash join between the table and an internal view, but this time the build and probe tables are reversed, and we observe that the cardinality estimate for the view, in step #5, is 100M, whereas the actual is 998K. The CBO has not been able to detect that the rn = 1 condition on the rank function would reduce the cardinality by a factor of about a hundred, so either choice of build table would look similar.

MHT_DML
I wondered how big an effect making the ‘wrong’ choice for the build table would have, and so looked to include a hint to force the ‘correct’ choice, and made this the statement MHT_DML. I wrote an article on the subject of hash join ‘inner’ ordering (as I called it there) last year, A Note on Oracle Join Orders and Hints, which used a simple 3-table query with no subqueries. In simple cases such as that one it is easy to force the choice of build table using the hints (no_)swap_join_inputs (tab) where tab is the alias of the table to be joined to the current rowset.

In more complicated situations with subqueries, such as we have in our merge statement, it is a little harder since we need to specify the target using internal query block names that are not in the original statement. Fortunately, there is an easy way to get the desired hint: The execution plans above are displayed using Oracle’s DBMS_XPlan.Display_Cursor API, and if you pass the keyword OUTLINE to this API, it returns the list of fully specified hints that determine the execution plan. We can extract the relevant hints from this list and modify if necessary. In the unhinted outline there is the hint:

swap_join_inputs(@"SEL$F5BB74E1" "TGT"@"SEL$1")

so to reverse the build/probe choice we simply change swap_join_inputs to no_swap_join_inputs.

This improves performance by 19% at the extreme data point.
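For reference, the outline section can be requested from Display_Cursor as below (NULL identifiers pick up the last statement executed in the session):

SELECT *
  FROM TABLE(DBMS_XPlan.Display_Cursor(NULL, NULL, 'ALLSTATS LAST +OUTLINE'));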

Incidentally, Tom Kyte discusses in detail how to use these outline hints to investigate different plans and to create baselines using SQL Plan Management in a 2014 Oracle Magazine article (referenced also in part 2 of this current article): On Table Updates and SQL Plan Baselines.

Another way of getting at the hint syntax is to use SQL Developer: right-clicking the hints displayed below an execution plan gives the full syntax.

MH2_DML
As mentioned above, I wondered what effect the use of Rank instead of the Row_Number used in the OTN thread had on performance. To check this I replaced Rank with Row_Number, ran an Explain Plan, and found quite a big difference in the plan (a hash join changing to a nested loops join), despite the fact that the difference in actual cardinality is extremely small so that the same plan should be optimal for both.

I followed the same approach as in MHT_DML to obtain the hints that would force the same plan as the Row_Number version via the SQL outline. This time two hints were required (you could just take the whole outline set of course):

leading(@"SEL$F5BB74E1" "from$_subquery$_007"@"SEL$2" "TGT"@"SEL$1")
use_nl(@"SEL$F5BB74E1" "TGT"@"SEL$1")

This version performs slightly better in CPU terms than the first hinted version, with smaller differences in elapsed times, and the two perform very similarly at the higher data points.

Conclusions

In part 1 of this article we demonstrated the use of my benchmarking framework for comparing DML with detailed timings, statistics and execution plans across a 2-dimensional grid. In terms of the problem addressed, and general learnings, a few points can be made:

  • The ‘obvious’ Update turned out to be faster as well as simpler than a more complicated Merge; Oracle’s own transformation of the update subquery, into a join between the table and an internal view, performed better than the hand-crafted attempt
  • The OUTLINE parameter to DBMS_XPlan.Display_Cursor is very useful to extract more difficult hint syntax (you can also get it from SQL Developer, by right-clicking the hints displayed below an execution plan)
  • We also showed, using a gif example, how to get these hints from SQL Developer
  • Regarding memory problems when generating large numbers of rows for test data, we linked to one solution, and provided an alternative for when that one is inapplicable

The example problem, together with all code used in both parts of this article, and the revisions made to the framework are available here: A Framework for Dimensional Benchmarking of SQL Performance.

Part 2 of the article, which benchmarks different methods for DML in the presence of indexes, is here: Benchmarking Oracle DML: A Case Study II – Effects of Indexes