OUG Ireland Conference 2018

I submitted an abstract for the OUG Ireland 2018 conference, which was accepted for inclusion last Friday. The conference takes place on 22 and 23 March 2018 in the Gresham Hotel on O'Connell Street in Dublin, where I presented at last year's conference. I'll link to the agenda when it is public next Monday (and here it is, a day late but never mind 🙂 OUG Ireland 2018 Conference - Agenda).

Twitter hashtag: #ukoug_ire






Knapsacks and Networks in SQL

I opened a GitHub account, Brendan's GitHub Page, last year and have added a number of projects since then, in PL/SQL and other 3GL languages. Partly in response to a request for the code for one of my blog articles on an interesting SQL problem, I recently decided to create a new repo for the SQL behind a group of articles on solving difficult combinatorial optimisation problems via 'advanced' SQL techniques such as recursive subquery factoring and the model clause: sql_demos - Brendan's repo for interesting SQL. It includes installation scripts with object creation and data setup, and scripts to run the SQL on the included datasets. The idea is that anyone with the prerequisites should be able to reproduce my results within a few minutes of downloading the repo.

[Left image from Knapsack problem; right image copied from Chapter 11 Dynamic Programming]

In this article I embed each of the earlier articles relevant to the GitHub repo with a brief preamble.

The first two articles are from January 2013 and use recursive subquery factoring to find exact solutions for the single and multiple knapsack problems, and also include PL/SQL solutions for comparison. They avoid the 'brute force' approach by truncating search paths as soon as limit constraints are exceeded. The cumulative paths are stored in string variables passed through the iterations (which would not be possible with the older Connect By hierarchical syntax).

In these articles I illustrate the nature of the problems using Visio diagrams, and include dimensional performance benchmarking results, using a technique that I presented on at last year's Ireland OUG conference: Dimensional Performance Benchmarking of SQL - IOUG Presentation. I also illustrate the queries using my own method for diagramming SQL queries.

A Simple SQL Solution for the Knapsack Problem (SKP-1), January 2013

An SQL Solution for the Multiple Knapsack Problem (SKP-m), January 2013
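To give a flavour of the truncation technique, here is a minimal sketch of my own (not the articles' actual queries), against a hypothetical items table and a weight limit of 100. The recursive branch extends a path only while the limit constraint holds, and accumulates the item list in a string:

WITH rsf (id, item_path, tot_weight, tot_profit) AS (
  SELECT id, To_Char (id), item_weight, item_profit
    FROM items
   WHERE item_weight <= 100
   UNION ALL
  SELECT i.id, r.item_path || ',' || i.id,
         r.tot_weight + i.item_weight, r.tot_profit + i.item_profit
    FROM rsf r
    JOIN items i
      ON i.id > r.id                          -- extend with higher ids only, avoiding duplicate item sets
   WHERE r.tot_weight + i.item_weight <= 100  -- truncate the path as soon as the limit would be exceeded
)
SELECT item_path, tot_weight, tot_profit
  FROM rsf
 ORDER BY tot_profit DESC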

The next article uses the Model clause to find a more general solution to a problem posed on AskTom as a 'bin fitting' problem. I also solved the problem by other methods, including recursive subquery factoring. I illustrate the problem itself, as well as the Model iteration scheme, using Visio diagrams, and again include dimensional performance benchmarking. The results show how, in this kind of problem, quadratic performance variation can be turned into much faster linear variation by means of a temporary table.

SQL for the Balanced Number Partitioning Problem, May 2013

This article arose from a question on OTN, and concerns a type of knapsack or bin-fitting problem that is quite tricky to solve in SQL, in which the items fall into categories with separate constraints on each. I introduced a new idea here: filtering out unpromising paths within recursive subquery factoring by means of analytic functions, which allows the technique to generate solutions for larger problems, without guaranteed optimality, but in shorter time (see the sketch after the link below). Two realistic datasets were used, one from the original poster, and another that I got from a scraping website.

SQL for the Fantasy Football Knapsack Problem, June 2013
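In outline, the filtering idea works like this (a hedged sketch of my own, not the article's query, using a hypothetical players table with price and points columns and a budget of 100): an analytic function ranks the partial solutions in the recursive branch, the rank is carried as a column, and all but the best-ranked paths are discarded at the next iteration:

WITH rsf (id, item_path, tot_price, tot_points, rnk) AS (
  SELECT id, To_Char (id), price, points,
         Row_Number () OVER (ORDER BY points DESC)
    FROM players
   WHERE price <= 100
   UNION ALL
  SELECT p.id, r.item_path || ',' || p.id,
         r.tot_price + p.price, r.tot_points + p.points,
         Row_Number () OVER (ORDER BY r.tot_points + p.points DESC)  -- rank the extended paths
    FROM rsf r
    JOIN players p
      ON p.id > r.id
   WHERE r.tot_price + p.price <= 100
     AND r.rnk <= 10                          -- keep only the 10 best-ranked paths from the prior iteration
)
SELECT item_path, tot_price, tot_points
  FROM rsf
 ORDER BY tot_points DESC

The rank filter is what sacrifices the optimality guarantee: it bounds the number of live paths at each iteration in exchange for a possibly sub-optimal final solution.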

This article is on a classic 'hard' optimisation problem. It uses recursive subquery factoring with the same filtering technique as the previous article, and shows that it's possible to solve a problem involving 312 American cities quite quickly in pure SQL using the approximation technique. It also uses a simple made-up example dataset to illustrate how it works.

SQL for the Travelling Salesman Problem, July 2013

The following two articles concern finding shortest paths between given nodes in a network, and arose from a question on OTN. The first one again uses recursive subquery factoring with a filtering mechanism to exclude paths as early as possible, in a similar way to the approximate solution methods in the earlier articles. In this case, however, reasoning about the nature of the problem shows that we are not in fact sacrificing optimality. The article has quite a lot of explanatory material on how the SQL works, and uses small dataset examples.

The second article considers how to improve performance further by obtaining a preliminary approximate solution that can be used as a bounding mechanism in a second step to find the exact solutions. This article uses two realistic networks as examples, including one having 428,156 links.

SQL for Shortest Path Problems, April 2015

SQL for Shortest Path Problems 2: A Branch and Bound Approach, May 2015

In the article above I cited results from a general network analysis package I had developed that obtains all the distinct connected subnetworks with their structures in an efficient manner using PL/SQL recursion. It is worth noting that for that kind of problem recursive SQL alone is very inefficient, and I wrote the following article to try to explain why that is so, and why the Connect By syntax is generally much worse than recursive subquery factoring.

Recursive SQL for Network Analysis, and Duality, September 2015

The PL/SQL package mentioned, which I think implements a 'named' algorithm although I didn't know that when I wrote it (I don't recall the name right now, sorry 🙁 ), is available on GitHub: Brendan's network structural analysis Oracle package, with article:

PL/SQL Pipelined Function for Network Analysis, May 2015






An Odd Performance Problem with Update - Twitter Thread

I came across an odd performance problem in some legacy Oracle PL/SQL code that I was able to resolve by replacing an update statement with a merge. I abstracted the essence of the problem using the simple table from my last blog post, plus a summary table. Unlike in the previous article, the simple update did not work very well. I made a short Twitter thread on my findings...

Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production






Benchmarking Oracle DML: A Case Study II - Effects of Indexes

This is the second part of a two-part article. The first part, Benchmarking Oracle DML: A Case Study I - Update vs Merge, An Example, compares an Update and a Merge statement for performance in updating a table involving a subquery, in the absence of indexes. The first part describes the problem and the mechanism for generating parameterised test data.

In this second part, we are interested in the effects on performance of indexes for DML statements that affect a large proportion of the table. To that end, we take as data sets the 1-dimensional 'shallow slice' of data set points from part 1 where the updates apply to about half of the total records. We'll run the statements in the presence of: (i) No indexes; (ii) product id index only; (iii) product id and sales date indexes. Note that the only updated column is sales date.

The idea behind the analysis is of course that when performing a large batch DML we may be able to drop the indexes first, then recreate them after the DML, depending on our environment. Obviously, if we save time on the DML this will be offset to some extent by the need to recreate the indexes. Therefore we will also time the index creations, and for good measure we'll include a timing of the well-known CTAS approach for bulk updates, where a new table is created by selecting from the table to be updated, and then the old table dropped and the new one renamed.
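In outline, the CTAS pattern looks like this (a sketch only, using this article's table and transformation; the benchmarked statement itself appears in the DDL group below, where the rename steps are replaced by a simple drop):

CREATE TABLE product_sales_new AS
SELECT product_id,
       CASE WHEN sales_date = Min (sales_date) OVER (PARTITION BY product_id)
            THEN DATE '1900-01-01' ELSE sales_date END sales_date
  FROM product_sales;

DROP TABLE product_sales;

ALTER TABLE product_sales_new RENAME TO product_sales;

-- any indexes, constraints and grants then need to be re-created on the renamed table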

Tom Kyte discusses issues around this kind of bulk update in a 2014 Oracle Magazine article (referenced also in part 1 of this current article) On Table Updates and SQL Plan Baselines. He notes, in particular, that the CTAS approach benefits from avoiding undo creation.

Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

DML and DDL Statements

In this part 2 we have two groups of statements to test. The first is DML, comprising the update from part 1 plus an insert and a delete statement. The group in fact also includes the three versions of the merge from part 1, but as those were always slower than the update, we'll exclude them from the article.

The second group has the two create index statements and the Create Table As Select. We can add timings from this group to the DML timings to compare doing the DML in the presence of one or both indexes with doing it without indexes and then recreating them afterwards. We can also compare the update approaches with the CTAS approach, with index creation added in to its timings.

DML Statements (Group DMLSALES)

In addition to the SQL code below, a condition 'WHERE 1=1' is added to all the statements except the index creations. This is a placeholder: the framework package replaces the '1's with the formatted timestamp mentioned in part 1, forcing a hard parse of each execution.

Update (UPD_DML)

This statement updates the records having the minimum date for their product, setting it to a hard-coded minimum date (excluding any records already at that date).

UPDATE product_sales sd
   SET sd.sales_date = DATE '1900-01-01'
 WHERE 1=1 AND sd.sales_date = ( 
   SELECT Min(sd2.sales_date)
     FROM product_sales sd2
    WHERE sd.product_id = sd2.product_id   
 )
   AND sd.sales_date != DATE '1900-01-01'

Insert (INS_DML)

This statement selects the products whose minimum sales date differs from the hard-coded minimum, and inserts one record for each with that hard-coded date.

INSERT INTO product_sales
WITH date_mins AS (
    SELECT product_id
      FROM product_sales
     GROUP BY product_id
     HAVING Min(sales_date) != DATE '1900-01-01'
)
SELECT product_id, DATE '1900-01-01'
  FROM date_mins

Delete (DEL_DML)

This statement deletes the records that the update statement updates.

DELETE product_sales sd
  WHERE 1=1 AND (product_id, sales_date) IN (
    SELECT product_id, Min(sales_date)
      FROM product_sales
     WHERE 1=1
     GROUP BY product_id
    HAVING Min(sales_date) != DATE '1900-01-01'
    )

DDL Statements (Group DDLSALES)

In this group we need a post_query_sql step to drop the created objects so that the execution at the next data point will work.

Create product_id index (PRD_DDL)

pre_query_sql

CREATE INDEX ps_prd_n1 ON product_sales (product_id)

post_query_sql

DROP INDEX ps_prd_n1

Create sales_date index (SDT_DDL)

pre_query_sql

CREATE INDEX ps_date_n1 ON product_sales (sales_date)

post_query_sql

DROP INDEX ps_date_n1

Create table as select (CRE_DDL)

pre_query_sql

CREATE TABLE product_sales_ctas AS 
SELECT product_id,
       CASE WHEN sales_date = Min(sales_date) OVER (PARTITION BY product_id) THEN DATE '1900-01-01' ELSE sales_date END sales_date
  FROM product_sales

post_query_sql

DROP TABLE product_sales_ctas

Data Sets

The same data generator code was used as in part 1, but this time we are interested in DML where a large proportion of the records are affected, so we take the 'shallow' data set only, where D=2 and W is in (1, 4, 7, 10). These lead to sizes of (200K, 800K, 1.4M, 2M) records, of which about half are updated or deleted, or copied by the insert.

For the DML statement group a batch is run for the given data set, with indexes present as follows:

0: No indexes
1: ps_prd_n1 (product id)
2: ps_prd_n1 (product id), ps_date_n1 (sales date)

For the DDL statement group a batch is run for the given data set, with post statement DDL dropping the created object.

Results

The detailed results can be seen in the embedded file below, including for the merge versions that are not included in the diagrams later.

Graphs

Although the DML statements were run against four data points, with results as in the embedded file above, we show graphs only at the wide point W=10, having 1M products with two records each. The graphs take the number of indexes as the x-axis. Scrollboxes are used to show elapsed time graphs at the top, while CPU and %CPU can be seen by scrolling down.

DML Times by Indexes

Elapsed Times: DML

CPU Times: DML

%CPU/Elapsed Times: DML

  • The 1-index case has the index on product id, which is not an updated column, and so the time for the update is about the same as with no indexes
  • The insert is much faster than both delete and update in all cases

DML Times Due to Indexes

Here we subtract the times in the 0-index case from the others to get estimates for the contributions to total times attributable to index processing.

%Elapsed Times due to Indexes: DML

%CPU Times due to Indexes: DML

  • The insert shows the greatest percentages due to indexes, having relatively small time when there are no indexes
  • As noted above, an index on a non-updated column has no effect on update time, but does affect insert and delete

Combined Update and Index Creation Times

Here we add the index creation times to the pure DML times to compare the total times for the direct update with the time taken when you drop the indexes first, then re-create them after the update. We also include the CTAS method.

Elapsed Total Times: Update

CPU Total Times: Update

%CPU/Elapsed Times: Update

  • For the 2-index case, where one of the indexes is on the updated column, the elapsed time is two and a half times as great for the direct update compared with dropping and re-creating the indexes
  • You could save a bit of time by leaving the non-updated-column index in place as that has no impact on the update (although I did not do this here)
  • The CTAS method was much faster than the others
  • If you scroll down you will see that the %CPU time is very high for CTAS, close to 100%, whereas for the other methods it's less than a third. This is no doubt related to the absence of undo processing noted by Tom Kyte in the article linked earlier:

And remember, they never create UNDO for the CREATE TABLE or CREATE INDEX commands

Combined Insert/Delete and Index Creation Times

Here we add the index creation times to the pure DML times to compare the total times for the direct insert and delete with the time taken when you drop the indexes first, then re-create them after the DML.

Elapsed Total Times: Insert/Delete

CPU Total Times: Insert/Delete

%CPU/Elapsed Times: Insert/Delete

  • For the 2-index case, the elapsed times are about four, and two and a half, times as great for the direct DML compared with dropping and re-creating the indexes, for insert and delete respectively
  • In fact, the times taken in creating the indexes are quite small compared to the DML, so that the times increase much more slowly with number of indexes for the drop/re-create methods
  • The %CPU time is significantly higher for the drop and re-create indexes methods

Conclusions

In part 2 of this article we compared timings for the DML statements on the example problem with and without indexes where a large proportion of records are affected. Findings include:

  • Dropping the indexes before doing the DML, then adding them back again usually gives much better performance than applying the DML directly, depending on the type of DML and the columns in the index
  • The CTAS method for updates is even faster, and can also be applied for the inserts and deletes, although we didn't include this here
  • Graphs show that CTAS has very high %CPU, reflecting the absence of undo processing mentioned in the linked article from Oracle Magazine

The example problem, together with all code used in both parts of this article, and the revisions made to the framework are available here: A Framework for Dimensional Benchmarking of SQL Performance. The framework, as presented at the 2017 Ireland Oracle User Group conference, Dimensional Performance Benchmarking of SQL - IOUG Presentation, has had significant upgrades made to allow benchmarking of both DML and DDL (previously it allowed for DML as a pre-query step only, for example to materialise a subquery with indexes).

Part 1 of this article is here: Benchmarking Oracle DML: A Case Study I - Update vs Merge, An Example.






Benchmarking Oracle DML: A Case Study I - Update vs Merge, An Example

Some time ago I was involved in performing a one-off update of a column in an Oracle table of 250 million records, of which about 50 million would be updated. In the initial attempt, in development, the update ran for a very long time before aborting with the error:

ORA-30036: unable to extend segment by 8 in undo tablespace 'UNDOTBS'

I noted that the updated column featured in two indexes, and realised that the update would likely entail much more work in updating the indexes than in the update of the table. I reasoned that, because the index data are physically stored in a way that depends on the values, changing the values could involve a lot of physical restructuring on disk. Updating the values in the table, on the other hand, probably would not involve much restructuring of the table data, if the storage requirements of the new values were similar to those of the old ones, which they were. Anyway, we changed the process to have it drop the indexes, do the update, then recreate the indexes. There are other, possibly even faster, ways of doing this (as we'll see), but the change allowed the whole process to complete in around an hour.
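In outline, the revised process looked like this (a sketch with illustrative names, not the actual system's code):

DROP INDEX big_tab_n1;
DROP INDEX big_tab_n2;

UPDATE big_tab                     -- the one-off correction (illustrative logic)
   SET upd_col = Upper (upd_col);

CREATE INDEX big_tab_n1 ON big_tab (upd_col);
CREATE INDEX big_tab_n2 ON big_tab (upd_col, other_col);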

Some time later I noticed an OTN thread, Improve query performance instead of aggregrate function, in which the poster requested help in improving the performance of an Oracle update statement (the title is a little misleading). Recalling my earlier experience, I suggested that dropping any indexes that included the updated column would improve performance. As it turned out, the poster stated that the table did not have any indexes, and other posters suggested various alternative ways of doing the update.

In the example there is a product sales table having product id and sales date columns (and a few others unspecified), and the update sets the sales date to a constant value for the earliest sales date for each product. The data model and SQL in the thread are relatively simple, and it occurred to me that it would be interesting to use the example to do a case study of the performance impact of indexes on updates and other DML statements.

In this two-part article I'll use parameterised datasets to do two sets of comparisons: First, we'll compare the performance of the original poster's update statement with a slightly modified version of another poster's solution using 'merge', across a 2-dimensional grid of dataset points with no indexes. Second (in part 2), we'll compare the performance of both forms of update, plus related delete and insert statements on the 1-dimensional 'slice' of dataset points where the updates apply to about half of the total records. In the second set, we'll run the statements in the presence of: (i) No indexes; (ii) product id index only; (iii) product id and sales date indexes, and we'll also compare with a Create Table As Select (CTAS) approach.

To do the comparisons, I use my own Oracle benchmarking framework, which I presented at the 2017 Ireland Oracle User Group conference, Dimensional Performance Benchmarking of SQL - IOUG Presentation. The framework, which has been upgraded during this work to cover DML and DDL more fully, including all code for this article, is available on GitHub: A Framework for Dimensional Benchmarking of SQL Performance.

Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

Test Data

Data Structure

CREATE TABLE product_sales (product_id NUMBER, sales_date DATE)
/
CREATE INDEX ps_date_n1 ON product_sales (sales_date)
/
CREATE INDEX ps_prd_n1 ON product_sales (product_id)
/

Data Generator

The data generation procedure takes two parameters, a width parameter being the number of products, and a depth parameter being the number of records per product. Rows are generated using two cross-joined subqueries that each generate rows via the common 'select from dual connect by' method, as follows:

INSERT INTO product_sales
WITH prod_gen AS (
  SELECT LEVEL + (i - 1)*c_max_rowgen product_id
    FROM DUAL
    CONNECT BY LEVEL <= l_wide_batch_sizes(i)
), day_gen AS (
  SELECT LEVEL rn
    FROM DUAL
    CONNECT BY LEVEL <= p_point_deep
)
SELECT p.product_id, c_start_date + Mod (Abs (DBMS_Random.Random), c_n_days_in_century)
  FROM prod_gen p
 CROSS JOIN day_gen d;

Note that:

  • Product ids are sequential
  • Dates are randomized across the century from 1 January 1900
  • A call is made to DBMS_Random.Seed at the start of the procedure to ensure each call with the same parameters will get the same (pseudo-)random dates
  • The insert occurs within a loop with index i in order to limit the number of rows generated at once (see below for reason)

The reason for limiting the number of rows generated by inserting within a loop is that the Oracle tree-walk mechanism uses memory increasing with number of levels traversed, and I hit the dreaded

Completed with error: ORA-30009: Not enough memory for CONNECT BY operation

There are various ways of avoiding this, including the approach of Generating lots of rows using connect by – safely!, which suggests cross-joining as many tree-walk subqueries as are necessary to generate the overall number from tree-walks of smaller size. In our situation, however, this approach is problematic because we pass in the desired number as a parameter. To get the exact number desired we would have to create the statement dynamically, with a set of subqueries whose limits multiply together to give that number (its prime factors, say). This is impractical, and in any case the highest prime factor could be too large. For this reason the inserts are performed in batches within a loop over an array containing the batch sizes.
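Here is a minimal sketch of the batching logic (names are hypothetical; the real procedure derives its sizes from the framework's parameters):

DECLARE
  c_max_rowgen       CONSTANT PLS_INTEGER := 100000;    -- assumed cap per tree-walk
  l_n_wide           PLS_INTEGER := 250000;             -- total number of products wanted (example)
  l_wide_batch_sizes SYS.ODCINumberList := SYS.ODCINumberList ();
BEGIN
  WHILE l_n_wide > 0 LOOP                               -- split the total into batches of at most c_max_rowgen
    l_wide_batch_sizes.EXTEND;
    l_wide_batch_sizes (l_wide_batch_sizes.COUNT) := Least (l_n_wide, c_max_rowgen);
    l_n_wide := l_n_wide - c_max_rowgen;
  END LOOP;

  FOR i IN 1..l_wide_batch_sizes.COUNT LOOP
    NULL;  -- the INSERT shown above runs here, generating l_wide_batch_sizes(i) products
  END LOOP;
END;
/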

Lower depth values correspond to larger proportions of records to be updated, with smaller numbers of values to be sorted within the product id partitions. For example, at depth 2, around half the records are updated, while at depth 100 around 1% are updated.

Test Case 1: Update vs Merge, no Indexes

Both update and merge statements below are based on statements in the thread mentioned above. I reformatted them and altered the merge to make it consistent with the update in updating all records of minimum date where duplication occurs.

One other change may be worth highlighting: As Steven Feuerstein noted recently, About the Date Literal in Oracle Database, the date literal seems to be under-used by Oracle developers, but it is neater than using To_Date with an explicit format mask. I replaced

TO_DATE ('01.01.2017', 'dd.mm.yyyy')

with the literal equivalent

DATE '2017-01-01'

I incidentally also changed the year for my test data.

Update/Merge Statements

Update (UPD_DML)

UPDATE product_sales sd
   SET sd.sales_date = DATE '1900-01-01'
 WHERE 1=1 AND sd.sales_date = ( 
   SELECT Min(sd2.sales_date)
     FROM product_sales sd2
    WHERE sd.product_id = sd2.product_id   
 )
   AND sd.sales_date != DATE '1900-01-01'

This is essentially the same statement as in the original post by user12251389.

Merge (MRG_DML)

MERGE INTO product_sales tgt
USING (SELECT *
       FROM (
         SELECT rowid arowid, product_id, DATE '1900-01-01' sales_date,
                sales_date AS old_sales_date,
                Rank() OVER (PARTITION BY product_id ORDER BY sales_date) rn
         FROM   product_sales    
       )
       WHERE rn = 1 AND 0 = Decode(sales_date, old_sales_date, 1, 0)) src
   ON (tgt.rowid = src.arowid)
 WHEN MATCHED THEN UPDATE
  SET tgt.sales_date = src.sales_date

This is essentially the same statement as in the post by responder Paulzip, except that where he had Row_Number I have put Rank to allow for duplicate updating in order to be consistent with the update statement.

For my performance testing I added two hinted versions of the above merge, the first of which has the following hint:

no_swap_join_inputs(@"SEL$F5BB74E1" "TGT"@"SEL$1")

while the second has two hints. These rather strange-looking hints will be explained below in relation to execution plans.

Results

The four SQL statements were run across a 2-dimensional grid of width and depth data points. After each update the number of records affected is saved against data point and SQL statement, and the transaction is rolled back. The elapsed and CPU times, and the CPU percentages, are displayed for wide and deep slices of the grid in the two scrollboxes below. Of course, data creation and rollback times are not included, although the instrumentation reports them separately in the log file.

The detailed results can be seen in the embedded file below.

Wide Slice Graphs

In the wide slice there are 10 million products and from 2 to 100 dates per product, with a maximum of 100 million records. At D=2 about half the records are updated, and at D=100 about 1% are updated.

Elapsed Times: Wide Slice

CPU Times: Wide Slice

CPU Percentages: Wide Slice

Deep Slice Graphs

In the deep slice there are 100 dates per product and from 100,000 to 1,000,000 products, with a maximum of 100 million records. About 1% of the records are updated.

Elapsed Times: Deep Slice

CPU Times: Deep Slice

CPU Percentages: Deep Slice

The results show:

  • The update SQL (UPD_DML) is faster at all data points than the merges, being generally twice as fast or better at the deep data points
  • At the shallow data points (D=2), the timings are much closer, reflecting in part the fact that proportionally more time goes to doing the actual updating
  • The two hinted versions of the merge are significantly faster than the unhinted version (MRG_DML), and we'll discuss this in relation to execution plans below

It is interesting to note that the update statement from the original poster in the OTN thread is faster than (the small variation on) the more complex merge statement proposed in the thread to improve performance! I considered whether my substitution of Rank for Row_Number might have been to blame, and found that it did have a significant effect on the execution plan, where it caused a hash join to be used in place of a nested loops join. In fact, the hinted version MH2_DML has the same plan as the Row_Number version would, and is faster than the unhinted merge, but still slower than the update.

Execution Plans

The benchmarking framework ensures that the SQL engine hard-parses, and thus re-calculates the optimal execution plan, at each instance, by inserting a functionally meaningless condition x=x into the statement, where x is the number given by the current timestamp to the millisecond formatted thus: To_Char(SYSTIMESTAMP, 'yymmddhh24missff3'). This results in the SQL id, which is the hash of the SQL text, being different each time.
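For example, the template predicate

WHERE 1=1

becomes, at one particular execution (timestamp value illustrative),

WHERE 180322103045123=180322103045123

so the SQL text, and hence its hash, differs at every execution.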

The execution plan for each SQL statement execution is printed to log, and was the same for each data point. The plans are listed in the scrollbox below at the highest data point.

Execution Plan for Update (UPD_DML)

--------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name          | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT       |               |      1 |        |      0 |00:01:04.49 |    1516K|   5474 |       |       |          |
|   1 |  UPDATE                | PRODUCT_SALES |      1 |        |      0 |00:01:04.49 |    1516K|   5474 |       |       |          |
|*  2 |   HASH JOIN            |               |      1 |   1001K|    998K|00:00:56.15 |     496K|   5474 |    53M|  8523K|   52M (0)|
|   3 |    VIEW                | VW_SQ_1       |      1 |   1001K|    997K|00:00:37.33 |     248K|      0 |       |       |          |
|*  4 |     FILTER             |               |      1 |        |    997K|00:00:37.15 |     248K|      0 |       |       |          |
|   5 |      SORT GROUP BY     |               |      1 |   1001K|   1000K|00:00:37.09 |     248K|      0 |    70M|    15M|   62M (0)|
|   6 |       TABLE ACCESS FULL| PRODUCT_SALES |      1 |    100M|    100M|00:00:01.62 |     248K|      0 |       |       |          |
|*  7 |    TABLE ACCESS FULL   | PRODUCT_SALES |      1 |     99M|     99M|00:00:12.53 |     248K|   5474 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("SD"."SALES_DATE"="MIN(SD2.SALES_DATE)" AND "SD"."PRODUCT_ID"="ITEM_1")
   4 - filter(MIN("SD2"."SALES_DATE")<>TO_DATE(' 1900-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
   7 - filter("SD"."SALES_DATE"<>TO_DATE(' 1900-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

Execution Plan for Merge (MRG_DML)

--------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name          | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|
--------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | MERGE STATEMENT             |               |      1 |        |      0 |00:02:30.40 |    1516K|    428K|    428K|       |       |          |         |
|   1 |  MERGE                      | PRODUCT_SALES |      1 |        |      0 |00:02:30.40 |    1516K|    428K|    428K|       |       |          |         |
|   2 |   VIEW                      |               |      1 |        |    998K|00:02:02.03 |     496K|    428K|    428K|       |       |          |         |
|*  3 |    HASH JOIN                |               |      1 |    100M|    998K|00:02:01.77 |     496K|    428K|    428K|  2047M|    52M|   55M (1)|    3329K|
|   4 |     TABLE ACCESS FULL       | PRODUCT_SALES |      1 |    100M|    100M|00:00:09.96 |     248K|      0 |      0 |       |       |          |         |
|*  5 |     VIEW                    |               |      1 |    100M|    998K|00:01:33.29 |     248K|  15903 |  15903 |       |       |          |         |
|*  6 |      WINDOW SORT PUSHED RANK|               |      1 |    100M|   1001K|00:01:33.33 |     248K|  15903 |  15903 |    70M|  2904K|   97M (1)|         |
|   7 |       TABLE ACCESS FULL     | PRODUCT_SALES |      1 |    100M|    100M|00:00:11.63 |     248K|      0 |      0 |       |       |          |         |
--------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("TGT".ROWID="from$_subquery$_007"."AROWID")
   5 - filter(("RN"=1 AND DECODE(INTERNAL_FUNCTION("SALES_DATE"),"OLD_SALES_DATE",1,0)=0))
   6 - filter(RANK() OVER ( PARTITION BY "PRODUCT_ID" ORDER BY "SALES_DATE")<=1)

Execution Plan for Merge with Join Inputs Hint (MHT_DML)

----------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name          | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | MERGE STATEMENT             |               |      1 |        |      0 |00:02:02.01 |    1516K|  19175 |  15903 |       |       |          |
|   1 |  MERGE                      | PRODUCT_SALES |      1 |        |      0 |00:02:02.01 |    1516K|  19175 |  15903 |       |       |          |
|   2 |   VIEW                      |               |      1 |        |    998K|00:01:51.06 |     496K|  19175 |  15903 |       |       |          |
|*  3 |    HASH JOIN                |               |      1 |    100M|    998K|00:01:50.84 |     496K|  19175 |  15903 |    77M|  5796K|  107M (0)|
|*  4 |     VIEW                    |               |      1 |    100M|    998K|00:01:32.03 |     248K|  15903 |  15903 |       |       |          |
|*  5 |      WINDOW SORT PUSHED RANK|               |      1 |    100M|   1001K|00:01:32.12 |     248K|  15903 |  15903 |    70M|  2904K|   97M (1)|
|   6 |       TABLE ACCESS FULL     | PRODUCT_SALES |      1 |    100M|    100M|00:00:10.95 |     248K|      0 |      0 |       |       |          |
|   7 |     TABLE ACCESS FULL       | PRODUCT_SALES |      1 |    100M|    100M|00:00:10.90 |     248K|   3272 |      0 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("TGT".ROWID="from$_subquery$_007"."AROWID")
   4 - filter(("RN"=1 AND DECODE(INTERNAL_FUNCTION("SALES_DATE"),"OLD_SALES_DATE",1,0)=0))
   5 - filter(RANK() OVER ( PARTITION BY "PRODUCT_ID" ORDER BY "SALES_DATE")<=1)

Execution Plan for Merge with Nested Loops Hint (MH2_DML)

------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name          | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | MERGE STATEMENT               |               |      1 |        |      0 |00:02:01.99 |    1516K|  27343 |  15903 |       |       |          |
|   1 |  MERGE                        | PRODUCT_SALES |      1 |        |      0 |00:02:01.99 |    1516K|  27343 |  15903 |       |       |          |
|   2 |   VIEW                        |               |      1 |        |    998K|00:01:34.39 |     496K|  27343 |  15903 |       |       |          |
|   3 |    NESTED LOOPS               |               |      1 |    100M|    998K|00:01:34.16 |     496K|  27343 |  15903 |       |       |          |
|*  4 |     VIEW                      |               |      1 |    100M|    998K|00:01:30.08 |     248K|  15903 |  15903 |       |       |          |
|*  5 |      WINDOW SORT PUSHED RANK  |               |      1 |    100M|   1001K|00:01:30.03 |     248K|  15903 |  15903 |    70M|  2904K|   97M (1)|
|   6 |       TABLE ACCESS FULL       | PRODUCT_SALES |      1 |    100M|    100M|00:00:11.43 |     248K|      0 |      0 |       |       |          |
|   7 |     TABLE ACCESS BY USER ROWID| PRODUCT_SALES |    998K|      1 |    998K|00:00:04.03 |     247K|  11440 |      0 |       |       |          |
------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - filter(("RN"=1 AND DECODE(INTERNAL_FUNCTION("SALES_DATE"),"OLD_SALES_DATE",1,0)=0))
   5 - filter(RANK() OVER ( PARTITION BY "PRODUCT_ID" ORDER BY "SALES_DATE")<=1)

The choices made by Oracle's Cost Based Optimiser (CBO) in construction of the execution plan are crucially dependent on the cardinality estimates it makes at each step. The outputs above display these estimates along with the actual numbers of rows returned. We know that the total number of records is 100M, and that approximately 1M of these are updated at the extreme wide/deep data point shown. How accurate are the cardinality estimates in the plans above? Let's take them in turn.

UPD_DML
Here the update subquery results in a hash join between the table and an internal view in which the records from a separate scan of the table are sorted and filtered to produce the records with minimum date value by product. The estimated cardinality of the view in step #3 is 1001K, which is close to the actual cardinality of 997K.

The view is used as the build table with the table itself in step #7 being used as the probe table. This looks like the right strategy because the smaller rowset is generally preferred as the build table, used to build the hash table for the join.

MRG_DML
The merge statement also has at its heart a hash join between the table and an internal view, but this time the build and probe tables are reversed, and we observe that the cardinality estimate for the view, in step #5, is 100M, whereas the actual is 998K. The CBO has not been able to detect that the rn = 1 condition on the rank function would reduce the cardinality by a factor of about a hundred, so either choice of build table would look similar.

MHT_DML
I wondered how big an effect making the 'wrong' choice for the build table would have, and so looked to include a hint to force the 'correct' choice, and made this the statement MHT_DML. I wrote an article on the subject of hash join 'inner' ordering (as I called it there) last year, A Note on Oracle Join Orders and Hints, which used a simple 3-table query with no subqueries. In simple cases such as that one it is easy to force the choice of build table using the hints (no_)swap_join_inputs (tab) where tab is the alias of the table to be joined to the current rowset.

In more complicated situations with subqueries, such as we have in our merge statement, it is a little harder since we need to specify the target using internal query block names that are not in the original statement. Fortunately, there is an easy way to get the desired hint: The execution plans above are displayed using Oracle's DBMS_XPlan.Display_Cursor API, and if you pass the keyword OUTLINE to this API, it returns the list of fully specified hints that determine the execution plan. We can extract the relevant hints from this list and modify if necessary. In the unhinted outline there is the hint:

swap_join_inputs(@"SEL$F5BB74E1" "TGT"@"SEL$1")

so to reverse the build/probe choice we simply change swap_join_inputs to no_swap_join_inputs.
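For reference, the outline can be listed with a call along these lines (the sql_id and child number here are placeholders):

SELECT *
  FROM TABLE (DBMS_XPlan.Display_Cursor ('&sql_id', 0, '+OUTLINE'))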

This improves performance by 19% at the extreme data point.

Incidentally, Tom Kyte discusses in detail how to use these outline hints to investigate different plans and to create baselines using SQL Plan Management in a 2014 Oracle Magazine article (referenced also in part 2 of this current article): On Table Updates and SQL Plan Baselines.

Another way of getting at the hint syntax is to use SQL Developer, as shown here:

MH2_DML
As mentioned above, I wondered what effect the use of Rank instead of the Row_Number used in the OTN thread had on performance. To check this I replaced Rank with Row_Number, ran an Explain Plan, and found quite a big difference in the plan (a hash join changing to a nested loops join), despite the fact that the difference in actual cardinality is extremely small so that the same plan should be optimal for both.

I followed the same approach as in MHT_DML to obtain the hints that would force the same plan as the Row_Number version via the SQL outline. This time two hints were required (you could just take the whole outline set of course):

leading(@"SEL$F5BB74E1" "from$_subquery$_007"@"SEL$2" "TGT"@"SEL$1")
use_nl(@"SEL$F5BB74E1" "TGT"@"SEL$1")

This version performs slightly better in CPU terms than the first hinted version, with smaller differences in elapsed times, and they perform very similarly at the higher data points.

Conclusions

In part 1 of this article we demonstrated the use of my benchmarking framework for comparing DML with detailed timings, statistics and execution plans across a 2-dimensional grid. In terms of the problem addressed, and general learnings, a couple of points can be made:

  • The 'obvious' Update turned out to be faster as well as simpler than a more complicated Merge; Oracle's own transformation of the update subquery, into a join between the table and an internal view, performed better than the hand-crafted attempt
  • The OUTLINE parameter to DBMS_XPlan.Display_Cursor is very useful to extract more difficult hint syntax
  • We also showed, using a gif example, how to get these hints from SQL Developer, by right-clicking the hints displayed below an execution plan
  • Regarding memory problems when generating large numbers of rows for test data, we linked to one solution, and provided an alternative for when that one is inapplicable

The example problem, together with all code used in both parts of this article, and the revisions made to the framework are available here: A Framework for Dimensional Benchmarking of SQL Performance.

Part 2 of the article, which benchmarks different methods for DML in the presence of indexes, is here: Benchmarking Oracle DML: A Case Study II - Effects of Indexes






Dimensional Benchmarking of String Splitting SQL

I noticed a question on AskTom last November concerning SQL for splitting delimited strings, Extract domain names from a column having multiple email addresses, a kind of question that arises frequently on the forums. There was some debate between reviewers Rajeshwaran Jeyabal, Stew Ashton and the AskTom guys on whether an XML-based solution performs better or worse than a more 'classic' solution based on the Substr and Instr functions and collections. AskTom's Chris Saxon noted:

For me this just highlights the importance of testing in your own environment with your own data. Just because someone produced a benchmark showing X is faster, doesn't mean it will be for you.

For me, relative performance is indeed frequently dependent on the size and 'shape' of the data used for testing. As I have my own 'dimensional' benchmarking framework, A Framework for Dimensional Benchmarking of SQL Performance, I was able to very quickly adapt Rajesh's test data to benchmark across numbers of records and numbers of delimiters, and I put the results on the thread. I then decided to take the time to expand the scope to include other solutions, and to use more general data sets, where the token lengths vary as well as the number of tokens per record.

In fact the scope expanded quite a bit, as I found more and more ways to solve the problem, and I have only now found the time to write it up. Here is a list of all the queries considered:

Queries using Connect By for row generation

  • MUL_QRY - Cast/Multiset to correlate Connect By
  • LAT_QRY - v12 Lateral correlated Connect By
  • UNH_QRY - Uncorrelated Connect By unhinted
  • RGN_QRY - Uncorrelated Connect By with leading hint
  • GUI_QRY - Connect By in main query using sys_guid trick
  • RGX_QRY - Regular expression function, Regexp_Substr

Queries not using Connect By for row generation

  • XML_QRY - XMLTABLE
  • MOD_QRY - Model clause
  • PLF_QRY - database pipelined function
  • WFN_QRY - 'WITH' PL/SQL function directly in query
  • RSF_QRY - Recursive Subquery Factor
  • RMR_QRY - Match_Recognize

Test Problem

DELIMITED_LISTS Table

CREATE TABLE delimited_lists(id INT, list_col VARCHAR2(4000))
/

Functional Test Data

The test data consist of pipe-delimited tokens ('|') in a VARCHAR2(4000) column in a table with a separate integer unique identifier. For functional testing we will add a single 'normal' record with two tokens, plus four more records designed to validate null-token edge cases as follows:

  1. Normal case, two tokens
  2. Leading null token, then two not null tokens
  3. Trailing null token, after two not null tokens
  4. Two not null tokens, with a null token in the middle
  5. Two null tokens only

     ID LIST_COL
------- ------------------------------------------------------------
      1 token_11|token_12
      2 |token_21|token_22
      3 token_31|token_32|
      4 token_41||token_42
      5 |

Functional Test Results

     ID TOKEN
------- ----------
      1 token_11
      1 token_12
      2
      2 token_21
      2 token_22
      3 token_31
      3 token_32
      3
      4 token_41
      4
      4 token_42
      5
      5

13 rows selected.

All queries returned the expected results above, except that the XML query returned 12 rows with only a single null token returned for record 5. In the performance testing, no null tokens were included, and all queries returned the same results.

Performance Test Data

Each test set consisted of 3,000 records with the list_col column containing the delimited string dependent on width (w) and depth (d) parameters, as follows:

  • Each record contains w tokens
  • Each token contains d characters from the sequence 1234567890 repeated as necessary

The output from the test queries therefore consists of 3,000*w records with a unique identifier and a token of length d. For performance testing purposes the benchmarking framework writes the results to a file in csv format, while counting only the query steps in the query timing results.
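For illustration, one (w, d) data point can be generated along these lines (my own sketch, not the framework's generator; :w and :d are bind variables for width and depth):

INSERT INTO delimited_lists (id, list_col)
SELECT LEVEL,
       RTrim (Replace (RPad ('x', :w, 'x'), 'x',
              RPad ('1234567890', :d, '1234567890') || '|'), '|')
  FROM DUAL
CONNECT BY LEVEL <= 3000

Each placeholder 'x' is replaced by a d-character token plus a delimiter, and the trailing delimiter is then trimmed off.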

In Oracle up to version 11.2, VARCHAR2 expressions cannot be longer than 4,000 characters, so I decided to run the framework for four sets of parameters, as follows:

  • Depth fixed, high; width range low: d=18, w=(50,100,150,200)
  • Depth fixed, low; width range high: d=1, w=(450,900,1350,1800)
  • Width fixed, low; depth range high: w=5, d=(195,390,585,780)
  • Width fixed, high; depth range low: w=150, d=(6,12,18,24)

All the queries showed strong time correlation with width, while a few also showed strong correlation with depth.

Queries

All execution plans are from the data point with Width=1800, Depth=1, which has the largest number of tokens per record.

Multiset Query (MUL_QRY)

SELECT d.id   id,
       Substr (d.list_col, Instr ('|' || d.list_col, '|', 1, t.COLUMN_VALUE), Instr (d.list_col || '|', '|', 1, t.COLUMN_VALUE) - Instr ('|' || d.list_col, '|', 1, t.COLUMN_VALUE)) token
  FROM delimited_lists d, 
       TABLE (CAST (MULTISET (SELECT LEVEL FROM DUAL CONNECT BY LEVEL <= Nvl (Length(d.list_col), 0) - Nvl (Length (Replace (d.list_col, '|')), 0) + 1) AS SYS.ODCINumberlist)) t

Plan hash value: 462687286

--------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                 |      1 |        |   5400K|00:01:52.86 |    3009 |       |       |          |
|   1 |  NESTED LOOPS                       |                 |      1 |     24M|   5400K|00:01:52.86 |    3009 |       |       |          |
|   2 |   TABLE ACCESS FULL                 | DELIMITED_LISTS |      1 |   3000 |   3000 |00:00:00.01 |    3009 |       |       |          |
|   3 |   COLLECTION ITERATOR SUBQUERY FETCH|                 |   3000 |   8168 |   5400K|00:01:49.96 |       0 |       |       |          |
|   4 |    CONNECT BY WITHOUT FILTERING     |                 |   3000 |        |   5400K|00:01:48.83 |       0 |  2048 |  2048 | 2048  (0)|
|   5 |     FAST DUAL                       |                 |   3000 |      1 |   3000 |00:00:00.01 |       0 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------------

Notes on MUL_QRY

This is the 'classic' CONNECT BY solution referred to above, which appears frequently on AskTom and elsewhere; I copied the version used by Rajesh. The somewhat convoluted casting between subquery and array and back to SQL record via multiset allows the prior table in the from list to be referenced within the inline view, which is otherwise not permitted in versions earlier than 12.1, where the LATERAL keyword was introduced.

Despite this query being regarded as the 'classic' CONNECT BY solution to string-splitting, we will find that it is inferior in performance to a query I wrote myself across all data points considered. The new query is also simpler, but is not the most efficient of all methods, as we see later.

Lateral Query (LAT_QRY)

SELECT d.id                id,
       l.subs              token
FROM delimited_lists d
CROSS APPLY (
  SELECT Substr (d.list_col, pos + 1, Lead (pos, 1, 4000) OVER (ORDER BY pos) - pos - 1) subs, pos
    FROM (
    SELECT Instr (d.list_col, '|', 1, LEVEL) pos
      FROM DUAL
    CONNECT BY
      LEVEL <= Length (d.list_col) - Nvl (Length (Replace (d.list_col, '|')), 0) + 1
    )
) l

Plan hash value: 631504984

-----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                 |                 |      1 |        |   5400K|00:15:41.97 |    3009 |       |       |          |
|   1 |  NESTED LOOPS                    |                 |      1 |   3000 |   5400K|00:15:41.97 |    3009 |       |       |          |
|   2 |   TABLE ACCESS FULL              | DELIMITED_LISTS |      1 |   3000 |   3000 |00:00:00.01 |    3009 |       |       |          |
|   3 |   VIEW                           | VW_LAT_2D0B8FC8 |   3000 |      1 |   5400K|00:02:02.67 |       0 |       |       |          |
|   4 |    WINDOW SORT                   |                 |   3000 |      1 |   5400K|00:02:00.59 |       0 | 43008 | 43008 |38912  (0)|
|   5 |     VIEW                         |                 |   3000 |      1 |   5400K|00:01:58.78 |       0 |       |       |          |
|   6 |      CONNECT BY WITHOUT FILTERING|                 |   3000 |        |   5400K|00:01:48.53 |       0 |  2048 |  2048 | 2048  (0)|
|   7 |       FAST DUAL                  |                 |   3000 |      1 |   3000 |00:00:00.01 |       0 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------------------------

Notes on LAT_QRY

This query is taken from Splitting Strings: Proof!, and uses a v12.1 new feature, described with examples in LATERAL Inline Views. The new feature allows you to correlate an inline view directly, without the convoluted Multiset code, and can also be used with the keywords CROSS APPLY instead of LATERAL. It's sometimes regarded as having performance advantages, but in this context we will see that avoiding this correlation altogether is best for performance.

Row-generator Query, Unhinted and Hinted (UNH_QRY and RGN_QRY)

Unhinted Query

WITH row_gen AS (
        SELECT LEVEL rn FROM DUAL CONNECT BY LEVEL <= 
            (SELECT Max (Nvl (Length(list_col), 0) - Nvl (Length (Replace (list_col,'|')), 0) + 1)
               FROM delimited_lists)
)
SELECT d.id   id,
       Substr (d.list_col, Instr ('|' || d.list_col, '|', 1, r.rn), Instr (d.list_col || '|', '|', 1, r.rn) - Instr ('|' || d.list_col, '|', 1, r.rn)) token
  FROM delimited_lists d
  JOIN row_gen r
    ON r.rn <= Nvl (Length(d.list_col), 0) - Nvl (Length (Replace (d.list_col,'|')), 0) + 1

Plan hash value: 747926158

---------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                 |      1 |        |   5400K|00:01:55.35 |    2717K|       |       |          |
|   1 |  NESTED LOOPS                  |                 |      1 |    150 |   5400K|00:01:55.35 |    2717K|       |       |          |
|   2 |   VIEW                         |                 |      1 |      1 |   1800 |00:00:07.39 |    1509 |       |       |          |
|   3 |    CONNECT BY WITHOUT FILTERING|                 |      1 |        |   1800 |00:00:07.39 |    1509 |  2048 |  2048 | 2048  (0)|
|   4 |     FAST DUAL                  |                 |      1 |      1 |      1 |00:00:00.01 |       0 |       |       |          |
|   5 |     SORT AGGREGATE             |                 |      1 |      1 |      1 |00:00:00.06 |    1509 |       |       |          |
|   6 |      TABLE ACCESS FULL         | DELIMITED_LISTS |      1 |   3000 |   3000 |00:00:00.01 |    1509 |       |       |          |
|*  7 |   TABLE ACCESS FULL            | DELIMITED_LISTS |   1800 |    150 |   5400K|00:01:53.61 |    2716K|       |       |          |
---------------------------------------------------------------------------------------------------------------------------------------

Execution Plan with Hint /*+ leading (d) */

Plan hash value: 1241630378

----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |                 |      1 |        |   5400K|00:00:02.37 |    3018 |       |       |          |
|   1 |  MERGE JOIN                     |                 |      1 |    150 |   5400K|00:00:02.37 |    3018 |       |       |          |
|   2 |   SORT JOIN                     |                 |      1 |   3000 |   3000 |00:00:00.07 |    1509 |    11M|  1318K|   10M (0)|
|   3 |    TABLE ACCESS FULL            | DELIMITED_LISTS |      1 |   3000 |   3000 |00:00:00.01 |    1509 |       |       |          |
|*  4 |   SORT JOIN                     |                 |   3000 |      1 |   5400K|00:00:01.42 |    1509 |   160K|   160K|  142K (0)|
|   5 |    VIEW                         |                 |      1 |      1 |   1800 |00:00:07.37 |    1509 |       |       |          |
|   6 |     CONNECT BY WITHOUT FILTERING|                 |      1 |        |   1800 |00:00:07.37 |    1509 |  2048 |  2048 | 2048  (0)|
|   7 |      FAST DUAL                  |                 |      1 |      1 |      1 |00:00:00.01 |       0 |       |       |          |
|   8 |      SORT AGGREGATE             |                 |      1 |      1 |      1 |00:00:00.06 |    1509 |       |       |          |
|   9 |       TABLE ACCESS FULL         | DELIMITED_LISTS |      1 |   3000 |   3000 |00:00:00.01 |    1509 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access(INTERNAL_FUNCTION("R"."RN")<=NVL(LENGTH("D"."LIST_COL"),0)-NVL(LENGTH(REPLACE("D"."LIST_COL",'|')),0)+1)
       filter(INTERNAL_FUNCTION("R"."RN")<=NVL(LENGTH("D"."LIST_COL"),0)-NVL(LENGTH(REPLACE("D"."LIST_COL",'|')),0)+1)

Notes on UNH_QRY and RGN_QRY

I wrote the UNH_QRY query in an attempt to avoid the convoluted Multiset approach of the 'classic' solution. The reason for the use of arrays and Multiset seems to be that, while we need to 'generate' multiple rows for each source row, the number of rows generated has to vary by source record and so the row-generating inline view computes the number of tokens for each record in its where clause.

The use of row-generating subqueries is quite common, but in other cases one often has a fixed number of rows to generate, as in data densification scenarios for example. It occurred to me that, although we don't know the number to generate, we do have an upper bound, dependent on the maximum number of characters, and we could generate that many in a subquery, then join only as many as are required to the source record.

This approach resulted in a simpler and more straightforward query, but it turned out in its initial form to be very slow. The execution plan above shows that the row generator is driving a nested loops join, within which a full scan is performed on the table. The CBO is not designed to optimise this type of algorithmic query, so I added a leading hint to reverse the join order, and this resulted in much better performance. In fact, as we see later, the hinted query outperforms the other CONNECT BY queries, including the v12.1 LAT_QRY query, at all data points considered.

sys_guid Query (GUI_QRY)

WITH guid_cby AS (
  SELECT id, level rn, list_col, Instr ('|' || d.list_col, '|', 1, LEVEL) pos
    FROM delimited_lists d
  CONNECT BY prior id = id and prior sys_guid() is not null and
    LEVEL <= Length (d.list_col) - Nvl (Length (Replace (d.list_col, '|')), 0) + 1
)
SELECT id  id, 
       Substr (list_col, pos, Lead (pos, 1, 4000) OVER (partition by id ORDER BY pos) - pos - 1) token
  FROM guid_cby

Plan hash value: 240527573

-------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|
-------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                 |      1 |        |   5400K|00:14:12.07 |   77412 |   2404K|   2404K|       |       |          |         |
|   1 |  WINDOW SORT                   |                 |      1 |   3000 |   5400K|00:14:12.07 |   77412 |   2404K|   2404K|    20G|    45M|  163M (0)|      18M|
|   2 |   VIEW                         |                 |      1 |   3000 |   5400K|00:04:07.47 |    1509 |      0 |      0 |       |       |          |         |
|*  3 |    CONNECT BY WITHOUT FILTERING|                 |      1 |        |   5400K|00:03:55.99 |    1509 |      0 |      0 |    12M|  1343K|   10M (0)|         |
|   4 |     TABLE ACCESS FULL          | DELIMITED_LISTS |      1 |   3000 |   3000 |00:00:00.01 |    1509 |      0 |      0 |       |       |          |         |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("ID"=PRIOR NULL)

Notes on GUI_QRY

This query also generates rows using CONNECT BY, but differs from the others shown by integrating the row-generation code with the main rowset and avoiding a separate subquery against DUAL. This seems to be a more recent approach than the traditional Multiset solution. It uses a trick involving the system function sys_guid() to avoid the 'connect by cycle' error that you would otherwise get, as explained in this OTN thread: Reg : sys_guid().

Unfortunately, and despite its current popularity on OTN, it turns out to be even less efficient than the earlier approaches, by quite a margin.

Regex Query (RGX_QRY)

WITH row_gen AS (
        SELECT LEVEL rn FROM DUAL CONNECT BY LEVEL < 2000
)
SELECT d.id   id,
       RTrim (Regexp_Substr (d.list_col || '|', '(.*?)\|', 1, r.rn), '|') token
  FROM delimited_lists d
  JOIN row_gen r
    ON r.rn <= Nvl (Length(d.list_col), 0) - Nvl (Length (Replace (d.list_col,'|')), 0) + 1

Plan hash value: 1537360357

----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |                 |      1 |        |   5400K|00:00:03.35 |    1509 |       |       |          |
|   1 |  MERGE JOIN                     |                 |      1 |    150 |   5400K|00:00:03.35 |    1509 |       |       |          |
|   2 |   SORT JOIN                     |                 |      1 |   3000 |   3000 |00:00:00.07 |    1509 |    11M|  1318K|   10M (0)|
|   3 |    TABLE ACCESS FULL            | DELIMITED_LISTS |      1 |   3000 |   3000 |00:00:00.01 |    1509 |       |       |          |
|*  4 |   SORT JOIN                     |                 |   3000 |      1 |   5400K|00:00:01.75 |       0 |   160K|   160K|  142K (0)|
|   5 |    VIEW                         |                 |      1 |      1 |   1999 |00:00:00.01 |       0 |       |       |          |
|   6 |     CONNECT BY WITHOUT FILTERING|                 |      1 |        |   1999 |00:00:00.01 |       0 |  2048 |  2048 | 2048  (0)|
|   7 |      FAST DUAL                  |                 |      1 |      1 |      1 |00:00:00.01 |       0 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access(INTERNAL_FUNCTION("R"."RN")<=NVL(LENGTH("D"."LIST_COL"),0)-NVL(LENGTH(REPLACE("D"."LIST_COL",'|')),0)+1)
       filter(INTERNAL_FUNCTION("R"."RN")<=NVL(LENGTH("D"."LIST_COL"),0)-NVL(LENGTH(REPLACE("D"."LIST_COL",'|')),0)+1)

Notes on RGX_QRY

This query also generates rows using CONNECT BY, but differs from the others shown by using regular expressions to do the token parsing, which is simpler than the Substr/Instr approaches.

This approach seems to be quite popular, but regular expression processing is well known to be potentially bad for performance, and so it proves here, with CPU time increasing quadratically with the number of tokens.

XML Query (XML_QRY)

SELECT id   id,
       x2   token
  FROM delimited_lists, XMLTable(
    'if (contains($X2,"|")) then ora:tokenize($X2,"\|") else $X2'
  PASSING list_col AS x2
  COLUMNS x2 VARCHAR2(4000) PATH '.'
)

Plan hash value: 2423482301

----------------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name                  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |                       |      1 |        |   5400K|00:00:10.85 |    3009 |
|   1 |  NESTED LOOPS                      |                       |      1 |     24M|   5400K|00:00:10.85 |    3009 |
|   2 |   TABLE ACCESS FULL                | DELIMITED_LISTS       |      1 |   3000 |   3000 |00:00:00.01 |    3009 |
|   3 |   COLLECTION ITERATOR PICKLER FETCH| XQSEQUENCEFROMXMLTYPE |   3000 |   8168 |   5400K|00:00:03.49 |       0 |
----------------------------------------------------------------------------------------------------------------------

Notes on XML_QRY

This query using XMLTable is copied from the version used by Jayesh in the AskTom thread above.

Model Query (MOD_QRY)

SELECT id      id,
       token   token
  FROM delimited_lists
 MODEL
    PARTITION BY (id)
    DIMENSION BY (1 rn)
    MEASURES (CAST('' AS VARCHAR2(4000)) AS token, '|' || list_col || '|' list_col, 2 pos, 0 nxtpos, Length(list_col) + 2 len)
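    -- Iterate over the tokens of each list (up to 2000 passes), until pos passes the end:
    -- each pass finds the next delimiter, stores the token in a new cell, then moves pos past it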
    RULES ITERATE (2000) UNTIL pos[1] > len[1] (
       nxtpos[1] = Instr (list_col[1], '|', pos[1], 1),
       token[iteration_number+1] = Substr (list_col[1], pos[1], nxtpos[1] - pos[1]),
       pos[1] = nxtpos[1] + 1
    )

Plan hash value: 1656081500

-------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |                 |      1 |        |   5400K|03:10:43.97 |    1509 |   2883 |   2883 |       |       |          |
|   1 |  SQL MODEL ORDERED FAST|                 |      1 |   3000 |   5400K|03:10:43.97 |    1509 |   2883 |   2883 |  2047M|   112M| 2844M (1)|
|   2 |   TABLE ACCESS FULL    | DELIMITED_LISTS |      1 |   3000 |   3000 |00:00:00.01 |    1509 |      0 |      0 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------------------------

Notes on MOD_QRY

I wrote this query for this article, using the Model clause, which has been available since Oracle Database version 10.

The Model clause has something of a reputation for poor performance, and this was one of the slower methods, with CPU time increasing quadratically with number of tokens.

Pipelined Function Query (PLF_QRY)

Pipelined Function Strings.Split

FUNCTION Split (p_string VARCHAR2, p_delim VARCHAR2) RETURN L1_chr_db_arr PIPELINED IS

  c_delim_len   CONSTANT SIMPLE_INTEGER := Length(p_delim);
  l_token_start          SIMPLE_INTEGER := 1;
  -- search with the delimiter appended, so a final token without a trailing delimiter is still found
  l_next_delim           SIMPLE_INTEGER := Instr (p_string || p_delim, p_delim, l_token_start, 1);

BEGIN

  WHILE l_next_delim > 0 LOOP
    PIPE ROW (Substr (p_string, l_token_start, l_next_delim - l_token_start));
    l_token_start := l_next_delim + c_delim_len;
    l_next_delim := Instr (p_string || p_delim, p_delim, l_token_start, 1);
  END LOOP;
  RETURN;

END Split;

Query Using Pipelined Function

SELECT d.id                id,
       s.COLUMN_VALUE      token
  FROM delimited_lists d
 CROSS JOIN TABLE (Strings.Split(d.list_col, '|')) s

Plan hash value: 2608399241

----------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |                 |      1 |        |   5400K|00:00:03.08 |    3009 |
|   1 |  NESTED LOOPS                      |                 |      1 |     24M|   5400K|00:00:03.08 |    3009 |
|   2 |   TABLE ACCESS FULL                | DELIMITED_LISTS |      1 |   3000 |   3000 |00:00:00.01 |    3009 |
|   3 |   COLLECTION ITERATOR PICKLER FETCH| SPLIT           |   3000 |   8168 |   5400K|00:00:01.87 |       0 |
----------------------------------------------------------------------------------------------------------------

Notes on PLF_QRY

This is a fairly well-known approach, in which the string splitting is done within a pipelined database function that is passed the delimited string as a parameter. I wrote my own version for this article, taking care to make only one call to each of the Oracle functions Instr and Substr within the loop over the tokens.

The results confirm that it is in fact the fastest approach over all data points considered, and CPU time increased approximately linearly with number of tokens.
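As a quick standalone usage check (a sketch, assuming the Strings package containing Split is installed as in the repo), the function can be called directly via a TABLE collection expression, returning one row per token:

SELECT COLUMN_VALUE token
  FROM TABLE (Strings.Split ('token_1|token_2|token_3', '|'))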

With Function v12.1 Query (WFN_QRY)

WITH FUNCTION Split (p_string VARCHAR2, p_delim VARCHAR2) RETURN L1_chr_db_arr IS
  c_delim_len   CONSTANT SIMPLE_INTEGER := Length(p_delim);
  l_token_start          SIMPLE_INTEGER := 1;
  -- search with the delimiter appended, so a final token without a trailing delimiter is still found
  l_next_delim           SIMPLE_INTEGER := Instr (p_string || p_delim, p_delim, l_token_start, 1);
  l_ret_arr              L1_chr_db_arr := L1_chr_db_arr();

BEGIN

  WHILE l_next_delim > 0 LOOP
    l_ret_arr.EXTEND;
    l_ret_arr(l_ret_arr.COUNT) := Substr (p_string, l_token_start, l_next_delim - l_token_start);
    l_token_start := l_next_delim + c_delim_len;
    l_next_delim := Instr (p_string || p_delim, p_delim, l_token_start, 1);
  END LOOP;
  RETURN l_ret_arr;

END Split;

SELECT d.id                id,
       s.COLUMN_VALUE      token
  FROM delimited_lists d
 CROSS JOIN TABLE (Split(d.list_col, '|')) s

Plan hash value: 2608399241

----------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |                 |      1 |        |   5400K|00:00:17.56 |    3009 |
|   1 |  NESTED LOOPS                      |                 |      1 |     24M|   5400K|00:00:17.56 |    3009 |
|   2 |   TABLE ACCESS FULL                | DELIMITED_LISTS |      1 |   3000 |   3000 |00:00:00.01 |    3009 |
|   3 |   COLLECTION ITERATOR PICKLER FETCH| SPLIT           |   3000 |   8168 |   5400K|00:00:02.57 |       0 |
----------------------------------------------------------------------------------------------------------------

Notes on WFN_QRY

Oracle introduced the ability to include a PL/SQL function definition directly in a query in version 12.1. I converted my pipelined function into a function within a query, returning an array of character strings.

As we would expect from the results of the similar pipelined function approach, this also turns out to be a very efficient solution. However, it may be surprising to many that it is significantly slower (20-30%) than using the separate database function, given the prominence that is usually assigned to context-switching.

Recursive Subquery Factor Query (RSF_QRY)

WITH rsf (id, token, nxtpos, nxtpos2, list_col, len, iter) AS
(
SELECT id,
       Substr (list_col, 1, Instr (list_col || '|', '|', 1, 1) - 1),
       Instr (list_col || '|', '|', 1, 1) + 1,
       Instr (list_col || '|', '|', 1, 2),
       list_col || '|',
       Length (list_col) + 1,
       1
  FROM delimited_lists
UNION ALL
SELECT id,
       Substr (list_col, nxtpos, nxtpos2 - nxtpos),
       nxtpos2 + 1,
       Instr (list_col, '|', nxtpos2 + 1, 1),
       list_col,
       len,
       iter + 1
  FROM rsf r
 WHERE nxtpos <= len
)
SELECT
       id       id,
       token    token
  FROM rsf

Plan hash value: 2159872273

--------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                 | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                          |                 |      1 |        |   5400K|03:27:56.79 |    1434M|       |       |          |
|   1 |  VIEW                                     |                 |      1 |   6000 |   5400K|03:27:56.79 |    1434M|       |       |          |
|   2 |   UNION ALL (RECURSIVE WITH) BREADTH FIRST|                 |      1 |        |   5400K|03:27:45.02 |    1434M|    12M|  1343K|   10M (0)|
|   3 |    TABLE ACCESS FULL                      | DELIMITED_LISTS |      1 |   3000 |   3000 |00:00:00.01 |    1509 |       |       |          |
|   4 |    RECURSIVE WITH PUMP                    |                 |   1800 |        |   5397K|00:00:04.48 |       0 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------------------

Notes on RSF_QRY

Oracle introduced recursive subquery factoring in v11.2, as an ANSI-standard approach to SQL recursion, with greater power than the older CONNECT BY syntax. I wrote the query for this article.

The query turned out to be surprisingly simple in structure, but for large numbers of tokens it was by far the slowest, and CPU time increased quadratically with number of tokens.

Match Recognize Query (RMR_QRY)

WITH row_gen AS (
        SELECT LEVEL rn FROM DUAL CONNECT BY LEVEL <= 4000
), char_streams AS (
SELECT d.id, r.rn, Substr (d.list_col || '|', r.rn, 1) chr
  FROM delimited_lists d
  JOIN row_gen r
    ON r.rn <= Nvl (Length(d.list_col), 0) + 2
), chars_grouped AS (
SELECT *
  FROM char_streams
 MATCH_RECOGNIZE (
   PARTITION BY id
   ORDER BY rn
   MEASURES chr mchr,
            FINAL COUNT(*) n_chrs,
            MATCH_NUMBER() mno
      ALL ROWS PER MATCH
  PATTERN (c*? d)
   DEFINE d AS d.chr = '|'
  ) m
)
SELECT id   id, 
       RTrim (Listagg (chr, '') WITHIN GROUP (ORDER BY rn), '|') token
  FROM chars_grouped
GROUP BY id, mno

Plan hash value: 2782416907

-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |                 |      1 |        |   5400K|00:00:52.53 |    6036K|    103K|    103K|       |       |          |         |
|   1 |  SORT GROUP BY                     |                 |      1 |    150 |   5400K|00:00:52.53 |    6036K|    103K|    103K|   330M|  6949K|   97M (1)|     395K|
|   2 |   VIEW                             |                 |      1 |    150 |     10M|00:00:53.55 |    6036K|  52539 |  52539 |       |       |          |         |
|   3 |    MATCH RECOGNIZE SORT            |                 |      1 |    150 |     10M|00:00:51.47 |    6036K|  52539 |  52539 |   231M|  5084K|  163M (1)|         |
|   4 |     VIEW                           |                 |      1 |    150 |     10M|00:00:18.75 |    6036K|      0 |      0 |       |       |          |         |
|   5 |      NESTED LOOPS                  |                 |      1 |    150 |     10M|00:00:11.76 |    6036K|      0 |      0 |       |       |          |         |
|   6 |       VIEW                         |                 |      1 |      1 |   4000 |00:00:00.01 |       0 |      0 |      0 |       |       |          |         |
|   7 |        CONNECT BY WITHOUT FILTERING|                 |      1 |        |   4000 |00:00:00.01 |       0 |      0 |      0 |  2048 |  2048 | 2048  (0)|         |
|   8 |         FAST DUAL                  |                 |      1 |      1 |      1 |00:00:00.01 |       0 |      0 |      0 |       |       |          |         |
|*  9 |       TABLE ACCESS FULL            | DELIMITED_LISTS |   4000 |    150 |     10M|00:00:10.12 |    6036K|      0 |      0 |       |       |          |         |
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   9 - filter("R"."RN"<=NVL(LENGTH("D"."LIST_COL"),0)+2)

Notes on RMR_QRY

Oracle introduced Match_Recognize in v12.1, as a mechanism for pattern matching across rows, along the lines of regular expressions for strings. I wrote the query for this article, converting each character in the input strings into a separate record to allow for its use.

This approach might seem somewhat convoluted, and one might expect it to be correspondingly slow. As it turns out, though, for most datasets it is faster than many of the other methods, the exception being datasets with very long tokens, and CPU time increased linearly with both the number of tokens and the number of characters per token. It is notable that, apart from the exception mentioned, it outperformed the regular expression query.

Performance Results

In the tables below, we will include some expressions used in Dimensional Benchmarking of Bracket Parsing SQL:

  • LRTB = Ratio to best time at high data point
  • RTP_L = Linear ratio as defined in the link above, averaged over successive data points
  • RTP_Q = Quadratic ratio as defined in the link above, averaged over successive data points

The CPU times are listed, but elapsed times are much the same. In each table the query columns are ordered by increasing CPU time at the last data point.

Depth fixed, high; width range low: d=18, w=(50,100,150,200)

Depth fixed, low; width range high: d=1, w=(450,900,1350,1800)

Width fixed, low; depth range high: w=5, d=(195,390,585,780)


Width fixed, high; depth range low: w=150, d=(6,12,18,24)

A Note on the Row Generation by Connect By Results

It is interesting to observe that the 'classical' mechanism for row-generation in string-splitting and similar scenarios turns out to be much slower than a simpler approach that removes the correlation of the row-generating subquery. This 'classical' mechanism has been proposed on Oracle forums over many years, while the simpler and faster approach seems to have gone undiscovered. The reason for the performance deficit is simple: running a Connect By query once for every master row is inherently inefficient. Using the v12.1 LATERAL correlation syntax simplifies the query but does little to improve performance.
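For illustration, here is a minimal sketch along LAT_QRY lines (not necessarily the repo code verbatim), which shows the correlation explicitly - the row generator is a lateral inline view referencing the master row:

SELECT d.id   id,
       Substr (d.list_col,
               Instr ('|' || d.list_col, '|', 1, r.rn),
               Instr (d.list_col || '|', '|', 1, r.rn) -
               Instr ('|' || d.list_col, '|', 1, r.rn)) token
  FROM delimited_lists d,
       LATERAL (
         -- the Connect By tree-walk still runs once per master row
         SELECT LEVEL rn FROM DUAL
         CONNECT BY LEVEL <= Nvl (Length (d.list_col), 0) - Nvl (Length (Replace (d.list_col, '|')), 0) + 1
       ) r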

The more recent approach to Connect By row-generation is to use the sys_guid 'trick' to embed the Connect By in the main query rather than in a correlated subquery, and this has become very popular on forums such as OTN. As we have seen, although simpler, it is even worse for performance: turning the whole query into a tree-walk does not help either. It is better to isolate the tree-walk, execute it once, and then join its result set, as in RGN_QRY.

Conclusions

  • The database pipelined function solution (PLF_QRY) is generally the fastest across all data points
  • Using the v12.1 feature of a PL/SQL function embedded within the query is almost always next best, although slower by up to about a third; its being slower than a database function may surprise some
  • Times generally increased uniformly with numbers of tokens, usually either linearly or quadratically
  • Times did not seem to increase so uniformly with token size, except for XML (XML_QRY), Match_Recognize (RMR_QRY) and regular expression (RGX_QRY)
  • For larger numbers of tokens, three methods all showed quadratic variation and were very inefficient: Model (MOD_QRY), regular expression (RGX_QRY), and recursive subquery factors (RSF_QRY)
  • We have highlighted two inefficient but widespread approaches to row-generation by Connect By SQL, and pointed out a better method

These conclusions are based on the string-splitting problem considered, but no doubt would apply to other scenarios involving requirements to split rows into multiple rows based on some form of string-parsing.

Database Version: Oracle Database 12c 12.1.0.2.0
Output: Batch_Str
GitHub: A Framework for Dimensional Benchmarking of SQL Query Performance
Overview Article: A Framework for Dimensional Benchmarking of SQL Performance






Benchmarking of Hash Join Options in SQL for Fixed-Depth Hierarchies

In the article Dimensional Benchmarking of SQL for Fixed-Depth Hierarchies (reproduced below), I compared the performance characteristics of three traversal methods, two recursive and one non-recursive, using my own benchmarking package (A Framework for Dimensional Benchmarking of SQL Performance), on a test problem of a fixed-level organization structure hierarchy, with 5 levels for performance testing and 3 levels for functional testing.

It is interesting to note that all joins in the execution plan are hash joins, and in the sequence you would expect. The first three are in the default join 'sub-order' that defines whether the joined table or the prior rowset (the default) is used to form the hash table, while the last two are in the reverse order, corresponding to the swap_join_inputs hint. I wrote a short note on that subject, A Note on Oracle Join Orders and Hints, last year, and have now written an article using the largest data point in the current problem to explore performance variation across the possible sub-orders.

This article uses the dimensional performance framework to test all 32 possible combinations in the given sequence of tables.

Notation and Hints

Dot-Bracket Code

In the note mentioned above I found a concise notation for representing a sequence of hash joins. A bracketed expression represents a rowset, and a '.' represents a hash join between a table and a rowset. As I noted there, the standard idea of join order does not fully specify the hash join, and I found it convenient to use the idea of 'outer-level order' - the sequence in which tables are joined to each other - and 'inner-level order' - which of the two factors goes on the 'left' side of the join, to form the hash table.

Oracle documentation uses the idea of swapping join inputs in a hash join to mean making the table the hash table, i.e. the left side of the join, so we can think of that as the non-default inner order.

When joining table c to a prior rowset, say (a.b), the default with (a.b) forming the hash table on the left is represented by:

(a.b).c

This can be forced by the hints use_hash(c) and no_swap_join_inputs(c).

In our example, defaulting each inner-join order is represented by:

((((s1.o1).s2).s3).s4).o5

The non-default is represented by:

c.(a.b)

This can be forced by the hints use_hash(c) and swap_join_inputs(c).

In our example, taking the non-default inner-join order at each join is represented by:

o5.(s4.(s3.(s2.(o1.s1))))

Binary Code

In enumerating all combinations, it is convenient to represent a combination by 5 binary digits, where 0 means the default inner order and 1 the non-default. Reading off the hint lists below, the first digit gives the order of the initial (s1, o1) pair, and the remaining four give the swap choices for s2, s3, s4 and o5. Therefore

  • 00000 is equivalent to ((((s1.o1).s2).s3).s4).o5
  • 11111 is equivalent to o5.(s4.(s3.(s2.(o1.s1))))

Hints

To ensure that the desired options are followed, we need three types of hint:

  • leading - to guarantee the outer-level join order
  • use_hash - to ensure hash joins
  • swap_join_inputs/no_swap_join_inputs - to force swap or no swap on the inner-order

As mentioned in my earlier note, the inner-order on the first two tables can only be forced by the leading hint.
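As an illustration, the full hint set for plan 00000 (H01_QRY below) is embedded in the main query along these lines; the query shape is that of JNS_QRY from the fixed-depth hierarchies article reproduced later:

SELECT /*+ leading(s1 o1 s2 s3 s4) use_hash(o1 s2 s3 s4 o5)
           no_swap_join_inputs(s2) no_swap_join_inputs(s3)
           no_swap_join_inputs(s4) no_swap_join_inputs(o5) */
       o1.org_name root_org,
       o5.org_name leaf_org,
       s1.fact * s2.fact * s3.fact * s4.fact fact_product
  FROM org_structure s1
  JOIN org_structure s2 ON s2.org_id = s1.child_org_id
  JOIN org_structure s3 ON s3.org_id = s2.child_org_id
  JOIN org_structure s4 ON s4.org_id = s3.child_org_id
  JOIN orgs o1 ON o1.id = s1.org_id
  JOIN orgs o5 ON o5.id = s4.child_org_id
 WHERE s1.struct_level = 1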

Graph of CPU, Elapsed Time and disk_writes by Plan


From the graph we observe that a step-change in performance occurs between plan 16 and 17, with the second half of the plans consuming around half as much elapsed time as the first half, and that this correlates very strongly with disk accesses. There is also a roughly 1 second increase in CPU time in the second half, which is no doubt a knock-on effect of the disk access processing.

It's clear that the final join to orgs with alias o5 is much faster when applied in the 'swapped' inner-order. It is well known that hash joins become much slower when the size of the hash table becomes too large for it to fit into memory and it has to spill to disk, and this is presumably what is happening here. We can see in my earlier article that the CBO, left to itself, in fact chooses one of the better plans.
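As a side check on the spilling (not part of the benchmarking framework; assuming access to v$sql_workarea and the statement's sql_id), the workarea statistics show which operations overflowed to temp:

SELECT operation_type,
       policy,
       last_memory_used,
       last_tempseg_size   -- non-null when the workarea spilled to temp
  FROM v$sql_workarea
 WHERE sql_id = :sql_id
 ORDER BY operation_id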

Execution Plans for H01, H16, H17, H32

H01_QRY

  • Dot-Bracket Code: ((((s1.o1).s2).s3).s4).o5
  • Binary Code: 00000
  • Hints: leading(s1 o1 s2 s3 s4) use_hash(o1 s2 s3 s4 o5) no_swap_join_inputs(s2) no_swap_join_inputs(s3) no_swap_join_inputs(s4) no_swap_join_inputs(o5)

Plan hash value: 892313792

----------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name          | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|
----------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |               |      1 |        |   3330K|00:00:19.68 |     782 |  96903 |  96872 |       |       |          |         |
|   1 |  SORT ORDER BY          |               |      1 |   3905K|   3330K|00:00:19.68 |     782 |  96903 |  96872 |   454M|  7027K|  163M (1)|     400K|
|*  2 |   HASH JOIN             |               |      1 |   3905K|   3330K|00:00:13.64 |     778 |  45725 |  45694 |   458M|    17M|  137M (1)|     372K|
|*  3 |    HASH JOIN            |               |      1 |   3905K|   3330K|00:00:00.38 |     771 |      0 |      0 |  4258K|  1196K| 6842K (0)|         |
|*  4 |     HASH JOIN           |               |      1 |    133K|  30520 |00:00:00.01 |     580 |      0 |      0 |   973K|   973K| 1314K (0)|         |
|*  5 |      HASH JOIN          |               |      1 |   4575 |    780 |00:00:00.01 |     389 |      0 |      0 |  1126K|  1126K| 1250K (0)|         |
|*  6 |       HASH JOIN         |               |      1 |     56 |     56 |00:00:00.01 |     198 |      0 |      0 |  1214K|  1214K| 1130K (0)|         |
|*  7 |        TABLE ACCESS FULL| ORG_STRUCTURE |      1 |     56 |     56 |00:00:00.01 |     191 |      0 |      0 |       |       |          |         |
|   8 |        TABLE ACCESS FULL| ORGS          |      1 |    944 |    944 |00:00:00.01 |       7 |      0 |      0 |       |       |          |         |
|   9 |       TABLE ACCESS FULL | ORG_STRUCTURE |      1 |  27288 |  27288 |00:00:00.01 |     191 |      0 |      0 |       |       |          |         |
|  10 |      TABLE ACCESS FULL  | ORG_STRUCTURE |      1 |  27288 |  27288 |00:00:00.01 |     191 |      0 |      0 |       |       |          |         |
|  11 |     TABLE ACCESS FULL   | ORG_STRUCTURE |      1 |  27288 |  27288 |00:00:00.01 |     191 |      0 |      0 |       |       |          |         |
|  12 |    TABLE ACCESS FULL    | ORGS          |      1 |    944 |    944 |00:00:00.01 |       7 |      0 |      0 |       |       |          |         |
----------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("O5"."ID"="S4"."CHILD_ORG_ID")
   3 - access("S4"."ORG_ID"="S3"."CHILD_ORG_ID")
   4 - access("S3"."ORG_ID"="S2"."CHILD_ORG_ID")
   5 - access("S2"."ORG_ID"="S1"."CHILD_ORG_ID")
   6 - access("O1"."ID"="S1"."ORG_ID")
   7 - filter("S1"."STRUCT_LEVEL"=1)

H16_QRY

  • Dot-Bracket Code: (s4.(s3.(s2.(o1.s1)))).o5
  • Binary Code: 11110
  • Hints: leading(o1 s1 s2 s3 s4) use_hash(s1 s2 s3 s4 o5) swap_join_inputs(s2) swap_join_inputs(s3) swap_join_inputs(s4) no_swap_join_inputs(o5)

Plan hash value: 1359118369

----------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name          | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|
----------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |               |      1 |        |   3330K|00:00:31.46 |     782 |  96902 |  96871 |       |       |          |         |
|   1 |  SORT ORDER BY          |               |      1 |   3905K|   3330K|00:00:31.46 |     782 |  96902 |  96871 |   453M|  7021K|  163M (1)|     400K|
|*  2 |   HASH JOIN             |               |      1 |   3905K|   3330K|00:00:24.45 |     778 |  45725 |  45694 |   458M|    17M|  135M (1)|     371K|
|*  3 |    HASH JOIN            |               |      1 |   3905K|   3330K|00:00:00.49 |     771 |      0 |      0 |  2764K|  1580K| 4059K (0)|         |
|   4 |     TABLE ACCESS FULL   | ORG_STRUCTURE |      1 |  27288 |  27288 |00:00:00.01 |     191 |      0 |      0 |       |       |          |         |
|*  5 |     HASH JOIN           |               |      1 |    133K|  30520 |00:00:00.01 |     580 |      0 |      0 |  2764K|  1580K| 4104K (0)|         |
|   6 |      TABLE ACCESS FULL  | ORG_STRUCTURE |      1 |  27288 |  27288 |00:00:00.01 |     191 |      0 |      0 |       |       |          |         |
|*  7 |      HASH JOIN          |               |      1 |   4575 |    780 |00:00:00.01 |     389 |      0 |      0 |  2764K|  1580K| 4104K (0)|         |
|   8 |       TABLE ACCESS FULL | ORG_STRUCTURE |      1 |  27288 |  27288 |00:00:00.01 |     191 |      0 |      0 |       |       |          |         |
|*  9 |       HASH JOIN         |               |      1 |     56 |     56 |00:00:00.01 |     198 |      0 |      0 |  1519K|  1519K| 1546K (0)|         |
|  10 |        TABLE ACCESS FULL| ORGS          |      1 |    944 |    944 |00:00:00.01 |       7 |      0 |      0 |       |       |          |         |
|* 11 |        TABLE ACCESS FULL| ORG_STRUCTURE |      1 |     56 |     56 |00:00:00.01 |     191 |      0 |      0 |       |       |          |         |
|  12 |    TABLE ACCESS FULL    | ORGS          |      1 |    944 |    944 |00:00:00.01 |       7 |      0 |      0 |       |       |          |         |
----------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("O5"."ID"="S4"."CHILD_ORG_ID")
   3 - access("S4"."ORG_ID"="S3"."CHILD_ORG_ID")
   5 - access("S3"."ORG_ID"="S2"."CHILD_ORG_ID")
   7 - access("S2"."ORG_ID"="S1"."CHILD_ORG_ID")
   9 - access("O1"."ID"="S1"."ORG_ID")
  11 - filter("S1"."STRUCT_LEVEL"=1)

H17_QRY

  • Dot-Bracket Code: o5.((((s1.o1).s2).s3).s4)
  • Binary Code: 00001
  • Hints: leading(s1 o1 s2 s3 s4) use_hash(o1 s2 s3 s4 o5) no_swap_join_inputs(s2) no_swap_join_inputs(s3) no_swap_join_inputs(s4) swap_join_inputs(o5)

Plan hash value: 2901541859

----------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name          | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|
----------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |               |      1 |        |   3330K|00:00:12.65 |     782 |  51181 |  51181 |       |       |          |         |
|   1 |  SORT ORDER BY          |               |      1 |   3905K|   3330K|00:00:12.65 |     782 |  51181 |  51181 |   454M|  7031K|  163M (1)|     400K|
|*  2 |   HASH JOIN             |               |      1 |   3905K|   3330K|00:00:02.58 |     778 |      0 |      0 |  1519K|  1519K| 1435K (0)|         |
|   3 |    TABLE ACCESS FULL    | ORGS          |      1 |    944 |    944 |00:00:00.01 |       7 |      0 |      0 |       |       |          |         |
|*  4 |    HASH JOIN            |               |      1 |   3905K|   3330K|00:00:00.34 |     771 |      0 |      0 |  4258K|  1196K| 6851K (0)|         |
|*  5 |     HASH JOIN           |               |      1 |    133K|  30520 |00:00:00.01 |     580 |      0 |      0 |   973K|   973K| 1345K (0)|         |
|*  6 |      HASH JOIN          |               |      1 |   4575 |    780 |00:00:00.01 |     389 |      0 |      0 |  1126K|  1126K| 1260K (0)|         |
|*  7 |       HASH JOIN         |               |      1 |     56 |     56 |00:00:00.01 |     198 |      0 |      0 |  1214K|  1214K| 1138K (0)|         |
|*  8 |        TABLE ACCESS FULL| ORG_STRUCTURE |      1 |     56 |     56 |00:00:00.01 |     191 |      0 |      0 |       |       |          |         |
|   9 |        TABLE ACCESS FULL| ORGS          |      1 |    944 |    944 |00:00:00.01 |       7 |      0 |      0 |       |       |          |         |
|  10 |       TABLE ACCESS FULL | ORG_STRUCTURE |      1 |  27288 |  27288 |00:00:00.01 |     191 |      0 |      0 |       |       |          |         |
|  11 |      TABLE ACCESS FULL  | ORG_STRUCTURE |      1 |  27288 |  27288 |00:00:00.01 |     191 |      0 |      0 |       |       |          |         |
|  12 |     TABLE ACCESS FULL   | ORG_STRUCTURE |      1 |  27288 |  27288 |00:00:00.01 |     191 |      0 |      0 |       |       |          |         |
----------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("O5"."ID"="S4"."CHILD_ORG_ID")
   4 - access("S4"."ORG_ID"="S3"."CHILD_ORG_ID")
   5 - access("S3"."ORG_ID"="S2"."CHILD_ORG_ID")
   6 - access("S2"."ORG_ID"="S1"."CHILD_ORG_ID")
   7 - access("O1"."ID"="S1"."ORG_ID")
   8 - filter("S1"."STRUCT_LEVEL"=1)

H32_QRY

  • Dot-Bracket Code: o5.(s4.(s3.(s2.(o1.s1))))
  • Binary Code: 11111
  • Hints: leading(o1 s1 s2 s3 s4) use_hash(s1 s2 s3 s4 o5) swap_join_inputs(s2) swap_join_inputs(s3) swap_join_inputs(s4) swap_join_inputs(o5)

Plan hash value: 1006677044

----------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name          | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|
----------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |               |      1 |        |   3330K|00:00:13.57 |     782 |  51173 |  51173 |       |       |          |         |
|   1 |  SORT ORDER BY          |               |      1 |   3905K|   3330K|00:00:13.57 |     782 |  51173 |  51173 |   454M|  7030K|  163M (1)|     400K|
|*  2 |   HASH JOIN             |               |      1 |   3905K|   3330K|00:00:01.30 |     778 |      0 |      0 |  1519K|  1519K| 1563K (0)|         |
|   3 |    TABLE ACCESS FULL    | ORGS          |      1 |    944 |    944 |00:00:00.01 |       7 |      0 |      0 |       |       |          |         |
|*  4 |    HASH JOIN            |               |      1 |   3905K|   3330K|00:00:00.29 |     771 |      0 |      0 |  2764K|  1580K| 4085K (0)|         |
|   5 |     TABLE ACCESS FULL   | ORG_STRUCTURE |      1 |  27288 |  27288 |00:00:00.01 |     191 |      0 |      0 |       |       |          |         |
|*  6 |     HASH JOIN           |               |      1 |    133K|  30520 |00:00:00.01 |     580 |      0 |      0 |  2764K|  1580K| 4102K (0)|         |
|   7 |      TABLE ACCESS FULL  | ORG_STRUCTURE |      1 |  27288 |  27288 |00:00:00.01 |     191 |      0 |      0 |       |       |          |         |
|*  8 |      HASH JOIN          |               |      1 |   4575 |    780 |00:00:00.01 |     389 |      0 |      0 |  2764K|  1580K| 4102K (0)|         |
|   9 |       TABLE ACCESS FULL | ORG_STRUCTURE |      1 |  27288 |  27288 |00:00:00.01 |     191 |      0 |      0 |       |       |          |         |
|* 10 |       HASH JOIN         |               |      1 |     56 |     56 |00:00:00.01 |     198 |      0 |      0 |  1519K|  1519K| 1547K (0)|         |
|  11 |        TABLE ACCESS FULL| ORGS          |      1 |    944 |    944 |00:00:00.01 |       7 |      0 |      0 |       |       |          |         |
|* 12 |        TABLE ACCESS FULL| ORG_STRUCTURE |      1 |     56 |     56 |00:00:00.01 |     191 |      0 |      0 |       |       |          |         |
----------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("O5"."ID"="S4"."CHILD_ORG_ID")
   4 - access("S4"."ORG_ID"="S3"."CHILD_ORG_ID")
   6 - access("S3"."ORG_ID"="S2"."CHILD_ORG_ID")
   8 - access("S2"."ORG_ID"="S1"."CHILD_ORG_ID")
  10 - access("O1"."ID"="S1"."ORG_ID")
  12 - filter("S1"."STRUCT_LEVEL"=1)

Database Version: Oracle Database 12c 12.1.0.2.0
Output: Batch_Org_Hints
GitHub: A Framework for Dimensional Benchmarking of SQL Query Performance
Overview Article: A Framework for Dimensional Benchmarking of SQL Performance






Dimensional Benchmarking of SQL for Fixed-Depth Hierarchies

In my recent article, Dimensional Benchmarking of Bracket Parsing SQL, I showed how it was much more efficient to solve a particular database querying problem using a database function than by two other pure SQL methods. I have also written articles using recursive PL/SQL functions to traverse network or hierarchical data structures, such as PL/SQL Pipelined Function for Network Analysis.

Networks or hierarchies of arbitrary depth are difficult to traverse in SQL without using recursion. However, there also exist hierarchies of fixed and fairly small depths, and these can be traversed either recursively or by a sequence of joins for each of the levels. In this article I compare the performance characteristics of three traversal methods, two recursive and one non-recursive, using my own benchmarking package (A Framework for Dimensional Benchmarking of SQL Performance), on a test problem of a fixed level organization structure hierarchy, with 5 levels for performance testing and 3 levels for functional testing.

The three queries tested were:

  • JNS_QRY: Sequence of joins
  • PLF_QRY: Recursive pipelined function
  • RSF_QRY: Recursive subquery factors

Fixed Level Hierarchy Problem Definition

A hierarchy is assumed in which there are a number of root records, and at each level a parent can have multiple child records and a child can also have multiple parents. Each level in the hierarchy corresponds to an entity of a particular type. Each parent-child record is associated with a numerical factor, and the products of these propagate down the levels.

The problem considered is to report all root/leaf combinations with their associated products. There may of course be multiple paths between any root and leaf, and in a real-world example one would likely want to aggregate. However, in order to keep it simple and focus on the traversal performance, I do not perform any aggregation.

Test Data Structure

Tables

CREATE TABLE orgs ( id              NUMBER NOT NULL, 
                    org_level       NUMBER NOT NULL, 
                    org_name        VARCHAR2(100) NOT NULL,
                    CONSTRAINT      org_pk PRIMARY KEY (id))
/
DROP TABLE org_structure
/
CREATE TABLE org_structure (
                    id              NUMBER NOT NULL, 
                    struct_level    NUMBER NOT NULL, 
                    org_id          NUMBER NOT NULL, 
                    child_org_id    NUMBER NOT NULL,
                    fact            NUMBER,
                    CONSTRAINT      ost_pk PRIMARY KEY (id))
/
CREATE INDEX ost_N1 ON org_structure (org_id)
/
CREATE INDEX ost_N2 ON org_structure (child_org_id)
/

Functional Test Data

To simplify functional validation a 3-level hierarchy was taken, with a relatively small number of records. The functional test data were generated by the same automated approach used for performance testing. The fact number was obtained as a random number between 0 and 1, and to keep it simple, duplicate pairs were permitted.

The test data were parametrised by width and depth as follows (the exact logic is a little complicated, but can be seen in the code itself):

  • Width corresponds to a percentage increase in the number of child entities relative to the number of parents
  • Depth corresponds to the proportion of the parent entity records a child is (randomly) assigned. Each child has a minimum of 1 parent (lowest depth), and a maximum of all parent entities (highest depth)

Test Data

orgs

        ID  ORG_LEVEL ORG_NAME
---------- ---------- ----------------------------------------------------------------------------------------------------
         1          1 L1 Org 1
         2            L1 Org 2
         3            L1 Org 3
         4          2 L2 Org 1
         5            L2 Org 2
         6            L2 Org 3
         7            L2 Org 4
         8            L2 Org 5
         9            L2 Org 6
        10          3 L3 Org 1
        11            L3 Org 2
        12            L3 Org 3
        13            L3 Org 4
        14            L3 Org 5
        15            L3 Org 6
        16            L3 Org 7
        17            L3 Org 8
        18            L3 Org 9
        19            L3 Org 10
        20            L3 Org 11
        21            L3 Org 12

21 rows selected.

org_structure

        ID STRUCT_LEVEL     ORG_ID CHILD_ORG_ID       FACT
---------- ------------ ---------- ------------ ----------
        25            1          2            4 .765337854
        26                       1            5 .157198428
        27                       2            6 .012739872
        28                       3            7  .75268798
        29                       2            8 .647269295
        30                       2            9 .972586624
         1            2          6           10 .290389829
         2                       7           10 .717844734
         3                       6           11 .909068079
         4                       7           11 .876644977
         5                       9           12  .93576597
         6                       6           12 .097462542
         7                       8           13 .316926046
         8                       8           13 .169842496
         9                       6           14 .765946795
        10                       4           14 .831552357
        11                       8           15 .110940017
        12                       7           15 .295163716
        13                       5           16 .171097557
        14                       5           16 .827432202
        15                       7           17 .339382023
        16                       7           17 .644889466
        17                       7           18 .955594058
        18                       5           18 .668546163
        19                       7           19 .785709973
        20                       6           19 .507321616
        21                       8           20 .511548918
        22                       7           20 .523510327
        23                       6           21 .242612715
        24                       5           21 .561006179

30 rows selected.

Result

ROOT_ORG   LEAF_ORG   FACT_PRODUCT
---------- ---------- ------------
L1 Org 1   L3 Org 12          0.09
L1 Org 1   L3 Org 7           0.03
L1 Org 1   L3 Org 7           0.13
L1 Org 1   L3 Org 9           0.11
L1 Org 2   L3 Org 1           0.00
L1 Org 2   L3 Org 10          0.01
L1 Org 2   L3 Org 11          0.33
L1 Org 2   L3 Org 12          0.00
L1 Org 2   L3 Org 2           0.01
L1 Org 2   L3 Org 3           0.00
L1 Org 2   L3 Org 3           0.91
L1 Org 2   L3 Org 4           0.11
L1 Org 2   L3 Org 4           0.21
L1 Org 2   L3 Org 5           0.01
L1 Org 2   L3 Org 5           0.64
L1 Org 2   L3 Org 6           0.07
L1 Org 3   L3 Org 1           0.54
L1 Org 3   L3 Org 10          0.59
L1 Org 3   L3 Org 11          0.39
L1 Org 3   L3 Org 2           0.66
L1 Org 3   L3 Org 6           0.22
L1 Org 3   L3 Org 8           0.26
L1 Org 3   L3 Org 8           0.49
L1 Org 3   L3 Org 9           0.72

24 rows selected.

All queries returned the expected results above.
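As a quick arithmetic check on the fact products, take the L1 Org 2 / L3 Org 11 row: org 2 links to org 8 (L2 Org 5) with factor 0.647269295, and org 8 links to org 20 (L3 Org 11) with factor 0.511548918, and this is the only root-to-leaf path between that pair, so

0.647269295 * 0.511548918 = 0.33 (to two decimal places)

matching the result above.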

Performance Test Data

The performance test data were created in the same way as the functional test data, but with 5 levels, and with 10 root organizations.

The test data sets used a grid of width and depth values of (100, 120, 140, 160, 180), which determined the numbers of output records produced.

Sequence of joins (JNS_QRY)

SELECT o1.org_name root_org,
       o5.org_name leaf_org,
       s1.fact * s2.fact * s3.fact * s4.fact fact_product
  FROM org_structure s1
  JOIN org_structure s2
    ON s2.org_id = s1.child_org_id
  JOIN org_structure s3
    ON s3.org_id = s2.child_org_id
  JOIN org_structure s4
    ON s4.org_id = s3.child_org_id
  JOIN orgs o1
    ON o1.id = s1.org_id
  JOIN orgs o5
    ON o5.id = s4.child_org_id
 WHERE s1.struct_level = 1
 ORDER BY o1.org_name, o5.org_name, s1.fact * s2.fact * s3.fact * s4.fact

Plan hash value: 914261573

----------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name          | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|
----------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |               |      1 |        |   3330K|00:00:11.12 |     718 |  51157 |  51157 |       |       |          |         |
|   1 |  SORT ORDER BY          |               |      1 |   3905K|   3330K|00:00:11.12 |     718 |  51157 |  51157 |   454M|  7031K|  163M (1)|     400K|
|*  2 |   HASH JOIN             |               |      1 |   3905K|   3330K|00:00:01.76 |     714 |      0 |      0 |  1483K|  1483K| 1524K (0)|         |
|   3 |    TABLE ACCESS FULL    | ORGS          |      1 |    944 |    944 |00:00:00.01 |       7 |      0 |      0 |       |       |          |         |
|*  4 |    HASH JOIN            |               |      1 |   3905K|   3330K|00:00:00.45 |     707 |      0 |      0 |  2733K|  1562K| 4103K (0)|         |
|   5 |     TABLE ACCESS FULL   | ORG_STRUCTURE |      1 |  27288 |  27288 |00:00:00.01 |     175 |      0 |      0 |       |       |          |         |
|*  6 |     HASH JOIN           |               |      1 |    133K|  30520 |00:00:00.02 |     532 |      0 |      0 |   917K|   917K| 3834K (0)|         |
|*  7 |      HASH JOIN          |               |      1 |   4575 |    780 |00:00:00.01 |     357 |      0 |      0 |  1062K|  1062K| 1260K (0)|         |
|*  8 |       HASH JOIN         |               |      1 |     56 |     56 |00:00:00.01 |     182 |      0 |      0 |  1185K|  1185K| 1184K (0)|         |
|*  9 |        TABLE ACCESS FULL| ORG_STRUCTURE |      1 |     56 |     56 |00:00:00.01 |     175 |      0 |      0 |       |       |          |         |
|  10 |        TABLE ACCESS FULL| ORGS          |      1 |    944 |    944 |00:00:00.01 |       7 |      0 |      0 |       |       |          |         |
|  11 |       TABLE ACCESS FULL | ORG_STRUCTURE |      1 |  27288 |  27288 |00:00:00.01 |     175 |      0 |      0 |       |       |          |         |
|  12 |      TABLE ACCESS FULL  | ORG_STRUCTURE |      1 |  27288 |  27288 |00:00:00.01 |     175 |      0 |      0 |       |       |          |         |
----------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("O5"."ID"="S4"."CHILD_ORG_ID")
   4 - access("S4"."ORG_ID"="S3"."CHILD_ORG_ID")
   6 - access("S3"."ORG_ID"="S2"."CHILD_ORG_ID")
   7 - access("S2"."ORG_ID"="S1"."CHILD_ORG_ID")
   8 - access("O1"."ID"="S1"."ORG_ID")
   9 - filter("S1"."STRUCT_LEVEL"=1)

Note
-----
   - this is an adaptive plan

Notes on JNS_QRY

It is interesting to note that all joins in the execution plan are hash joins, and in the sequence you would expect. The first three are in the default join 'sub-order' that defines whether the joined table or the prior rowset (the default) is used to form the hash table, while the last two are in the reverse order, corresponding to the swap_join_inputs hint. I wrote a short note on that subject, A Note on Oracle Join Orders and Hints, last year, and have now written an article using the largest data point in the current problem to explore performance variation across the possible sub-orders.

Recursive pipelined function (PLF_QRY)

CREATE OR REPLACE TYPE org_struct_rec_type IS OBJECT (struct_level NUMBER, org_id NUMBER, fact_product NUMBER);
/
CREATE TYPE org_struct_lis_type IS VARRAY(32767) OF org_struct_rec_type;
/
CREATE OR REPLACE FUNCTION Org_Products (p_org_id PLS_INTEGER, p_fact_product NUMBER) RETURN org_struct_lis_type PIPELINED IS
BEGIN

  -- loop over the direct children of the input org, accumulating the factor product
  FOR rec_org_struct IN (
      SELECT child_org_id,
             p_fact_product * fact fact_product,
             struct_level
      FROM org_structure
      WHERE org_id = p_org_id) LOOP

    PIPE ROW (org_struct_rec_type (rec_org_struct.struct_level, rec_org_struct.child_org_id, rec_org_struct.fact_product));

    -- recurse to pipe all descendants of this child
    FOR rec_org_struct_child IN (SELECT struct_level, org_id, fact_product FROM TABLE (Org_Products (rec_org_struct.child_org_id, rec_org_struct.fact_product))) LOOP

      PIPE ROW (org_struct_rec_type (rec_org_struct_child.struct_level, rec_org_struct_child.org_id, rec_org_struct_child.fact_product));

    END LOOP;

  END LOOP;
  RETURN;

END Org_Products;

SELECT o1.org_name root_org,
       o5.org_name leaf_org,
       t.fact_product fact_product
  FROM org_structure s
  CROSS APPLY TABLE (Org_Products (s.child_org_id, s.fact)) t
  JOIN orgs o1
    ON o1.id = s.org_id
  JOIN orgs o5
    ON o5.id = t.org_id
 WHERE s.struct_level = 1
   AND t.struct_level = 4
 ORDER BY o1.org_name, o5.org_name, t.fact_product

Plan hash value: 1216100769

-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name          | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |               |      1 |        |   3330K|00:03:38.10 |    9072K|  51163 |  51163 |       |       |          |         |
|   1 |  SORT ORDER BY                       |               |      1 |   4574 |   3330K|00:03:38.10 |    9072K|  51163 |  51163 |   455M|  7037K|  163M (1)|     400K|
|*  2 |   HASH JOIN                          |               |      1 |   4574 |   3330K|00:03:15.00 |    9072K|      0 |      0 |  1483K|  1483K| 1489K (0)|         |
|   3 |    TABLE ACCESS FULL                 | ORGS          |      1 |    944 |    944 |00:00:00.01 |       7 |      0 |      0 |       |       |          |         |
|   4 |    NESTED LOOPS                      |               |      1 |   4574 |   3330K|00:03:13.28 |    9072K|      0 |      0 |       |       |          |         |
|*  5 |     HASH JOIN                        |               |      1 |     56 |     56 |00:00:00.01 |     182 |      0 |      0 |  1160K|  1160K| 1199K (0)|         |
|*  6 |      TABLE ACCESS FULL               | ORG_STRUCTURE |      1 |     56 |     56 |00:00:00.01 |     175 |      0 |      0 |       |       |          |         |
|   7 |      TABLE ACCESS FULL               | ORGS          |      1 |    944 |    944 |00:00:00.01 |       7 |      0 |      0 |       |       |          |         |
|*  8 |     COLLECTION ITERATOR PICKLER FETCH| ORG_PRODUCTS  |     56 |     82 |   3330K|00:03:12.84 |    9072K|      0 |      0 |       |       |          |         |
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("O5"."ID"=VALUE(KOKBF$))
   5 - access("O1"."ID"="S"."ORG_ID")
   6 - filter("S"."STRUCT_LEVEL"=1)
   8 - filter(VALUE(KOKBF$)=4)

Outer Loop Function Query - Explain Plan only

SELECT child_org_id,
       fact fact_product,
       struct_level
  FROM org_structure
  WHERE org_id = 1

Query Plan
---------------------------------------------------
SELECT STATEMENT   Cost = 41
  TABLE ACCESS BY INDEX ROWID BATCHED ORG_STRUCTURE
    INDEX RANGE SCAN OST_N1

Notes on PLF_QRY

For simplicity a stand-alone database function was used here. The query execution plan was obtained by the benchmarking framework, with the highest data point plan listed. The query within the function was extracted, and an Explain Plan performed manually, which showed the expected index range scan.

Recursive subquery factors (RSF_QRY)

WITH rsf (root_org_id, child_org_id, fact_product, lev) AS
(
SELECT org_id, child_org_id, fact, 1
  FROM org_structure
 WHERE struct_level = 1
UNION ALL
SELECT r.root_org_id,
       s.child_org_id,
       r.fact_product * s.fact,
       r.lev + 1
  FROM rsf r
  JOIN org_structure s
    ON s.org_id = r.child_org_id
)
SELECT o1.org_name root_org,
       o5.org_name leaf_org,
       r.fact_product fact_product
  FROM rsf r
  JOIN orgs o1
    ON o1.id = r.root_org_id
  JOIN orgs o5
    ON o5.id = r.child_org_id
 WHERE r.lev = 4
 ORDER BY o1.org_name, o5.org_name, r.fact_product

Plan hash value: 248371385

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                    | Name          | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                             |               |      1 |        |   3330K|00:00:55.92 |      39M|  73843 |  93808 |       |       |          |         |
|   1 |  SORT ORDER BY                               |               |      1 |   4631 |   3330K|00:00:55.92 |      39M|  73843 |  93808 |   454M|  7030K|  162M (1)|     400K|
|*  2 |   HASH JOIN                                  |               |      1 |   4631 |   3330K|00:00:45.06 |      39M|  22678 |  42643 |  1519K|  1519K| 1555K (0)|         |
|   3 |    TABLE ACCESS FULL                         | ORGS          |      1 |    944 |    944 |00:00:00.01 |       7 |      0 |      0 |       |       |          |         |
|*  4 |    HASH JOIN                                 |               |      1 |   4631 |   3330K|00:00:43.14 |      39M|  22678 |  42643 |  1519K|  1519K| 1546K (0)|         |
|   5 |     TABLE ACCESS FULL                        | ORGS          |      1 |    944 |    944 |00:00:00.01 |       7 |      0 |      0 |       |       |          |         |
|*  6 |     VIEW                                     |               |      1 |   4631 |   3330K|00:00:41.58 |      39M|  22678 |  42643 |       |       |          |         |
|   7 |      UNION ALL (RECURSIVE WITH) BREADTH FIRST|               |      1 |        |   3361K|00:00:40.72 |      39M|  22678 |  42643 |   173M|  4426K|   97M (0)|         |
|*  8 |       TABLE ACCESS FULL                      | ORG_STRUCTURE |      1 |     56 |     56 |00:00:00.01 |     175 |      0 |      0 |       |       |          |         |
|*  9 |       HASH JOIN                              |               |      4 |   4575 |   3361K|00:00:06.43 |     701 |  22678 |  22935 |   282M|    10M|   56M (1)|     191K|
|  10 |        RECURSIVE WITH PUMP                   |               |      4 |        |   3361K|00:00:00.56 |       1 |  19708 |      0 |       |       |          |         |
|  11 |        TABLE ACCESS FULL                     | ORG_STRUCTURE |      4 |  27288 |    109K|00:00:00.02 |     700 |      0 |      0 |       |       |          |         |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("O5"."ID"="R"."CHILD_ORG_ID")
   4 - access("O1"."ID"="R"."ROOT_ORG_ID")
   6 - filter("R"."LEV"=4)
   8 - filter("STRUCT_LEVEL"=1)
   9 - access("S"."ORG_ID"="R"."CHILD_ORG_ID")

Note
-----
   - this is an adaptive plan

Notes on RSF_QRY

This uses recursive subquery factoring, a feature introduced in Oracle v11.2.

Performance Testing Results

Deep Slice Elapsed Times [d=180, w=(100, 120, 140, 160, 180)]

  • JNS_QRY is faster than RSF_QRY, which is faster than PLF_QRY at all data points
  • PLF_QRY's timing tracks the number of output records very closely. This is likely because the function executes an indexed query at every node in the hierarchy
  • The pure SQL methods scale better through being able to do full table scans, and avoiding multiple query executions

Deep Slice Elapsed - CPU Times

The elapsed times minus CPU times are shown in the first graph below, followed by the disk writes. The disk writes (and reads) are computed as the maximum values across the execution plan at the given data point, and are obtained from the system view v$sql_plan_statistics_all. The benchmarking framework gathers these and other statistics automatically.

  • The graphs show how the elapsed time minus CPU times track the disk accesses reasonably well
  • RSF_QRY does nearly twice as many disk writes as the other two
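The query against v$sql_plan_statistics_all is of the following general form; this is a sketch only, assuming the sql_id and child_number of the monitored query are known (the framework identifies these itself):

SELECT MAX (last_disk_reads)  max_disk_reads,
       MAX (last_disk_writes) max_disk_writes
  FROM v$sql_plan_statistics_all
 WHERE sql_id       = :sql_id
   AND child_number = :child_number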

Wide Slice Results [w=180, d=(100, 120, 140, 160, 180)]

The performance characteristics of the three methods across the wide slice data points are pretty similar to those across the deep slice. The graphs are shown below.

Conclusions

  • For the example problem taken, the most efficient way to traverse fixed-level hierarchies is by a sequence of joins
  • Recursive methods are significantly worse, and the recursive function is especially inefficient because it performs large numbers of query executions using indexed searches, instead of full scans
  • The execution plan for the join sequence query gives an example of a sequence of hash joins with different choices of 'join inputs'. It may be interesting to explore the performance characteristics of the possible choices using hints in a subsequent article (Benchmarking of Hash Join Options in SQL for Fixed-Depth Hierarchies)
  • The output log is attached, and all code is on my GitHub project, GitHub: dim_bench_sql_oracle

Batch_Org






Dimensional Benchmarking of Bracket Parsing SQL

I noticed an interesting thread on OTN recently, Matching ( in a string. It's about using SQL to find matching bracket pairs (technically 'parentheses' but 'brackets' is shorter, and it makes no difference to the SQL). Incidentally, I found recently that nested bracket expressions are a nice way of compactly representing the structure of complex hash join execution plans, A Note on Oracle Join Orders and Hints.

I thought it would be interesting to run some of the solution queries through my benchmarking package to test performance (A Framework for Dimensional Benchmarking of SQL Performance). I decided to consider only the queries that addressed multiple records, and the form of the problem that requires returning record uid, opening and closing bracket positions, plus the substring enclosed. These were by 'mathguy' and 'Peter vd Zwan', and I made very minor tweaks for consistency. I also wrote a query myself using PL/SQL in an inline SQL function using the new (v12.1) 'WITH Function' functionality, and copied this to a version using a pipelined database function to check for any performance differences. The four queries tested were then:

  • CBL_QRY, mathguy: Connect By, Analytics, Regex
  • MRB_QRY, Peter vd Zwan: Connect By, Match_Recognize
  • WFB_QRY, me: With PL/SQL Function, Arrays
  • PFB_QRY, me: Pipelined PL/SQL Function, Arrays

Bracket Pair Definition

Consider a function (BrDiff) defined at each character position as the difference between the number of opening and closing brackets to the left of, or at, that position.

A closing bracket closes an opening bracket if it is the first closing bracket where BrDiff is 1 less than BrDiff at the opening bracket. If all brackets are in some pair, then the expression can be considered well-formed.

This can be illustrated with a diagram for the fourth functional example below.
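In SQL terms, BrDiff is just an analytic running sum over character positions. A minimal sketch, assuming a source (here a hypothetical view char_positions) with one row per character ch of each string at position pos:

SELECT id, pos, ch,
       SUM (CASE ch WHEN '(' THEN 1 WHEN ')' THEN -1 ELSE 0 END)
           OVER (PARTITION BY id ORDER BY pos) br_diff
  FROM char_positions -- hypothetical: one row per character of str

Subquery p in CBL_QRY below computes essentially this, restricted to the bracket positions themselves.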

Test Problem

BRACKET_STRINGS Table

CREATE TABLE bracket_strings (id NUMBER, str VARCHAR2(4000))
/

Functional Test Data

I took four of mathguy's test records, excluding the (deliberately) badly-formed strings; some of the records include embedded line returns:

Test Data

     ID STR
------- ----------------------------------------
      1  ((Hello ( Hi Hi hi ( A B C ( D)) (EF)
        why Whwy whyhhh )
        )
        )

      2 (1+3*(3-1) + 3*(2+1))
      3 ()()*(())a()(())
      4 b0(b1(b2(b3(x))(xy)))

Result

     ID      O_POS      C_POS STR
------- ---------- ---------- ----------------------------------------
      1          2         60 ((Hello ( Hi Hi hi ( A B C ( D)) (EF)
                              why Whwy whyhhh )
                              )
                              )

      1          3         58 (Hello ( Hi Hi hi ( A B C ( D)) (EF)
                              why Whwy whyhhh )
                              )

      1         10         56 ( Hi Hi hi ( A B C ( D)) (EF)
                              why Whwy whyhhh )

      1         21         33 ( A B C ( D))
      1         29         32 ( D)
      1         35         38 (EF)
      2          1         21 (1+3*(3-1) + 3*(2+1))
      2          6         10 (3-1)
      2         16         20 (2+1)
      3          1          2 ()
      3          3          4 ()
      3          6          9 (())
      3          7          8 ()
      3         11         12 ()
      3         13         16 (())
      3         14         15 ()
      4          3         21 (b1(b2(b3(x))(xy)))
      4          6         20 (b2(b3(x))(xy))
      4          9         15 (b3(x))
      4         12         14 (x)
      4         16         19 (xy)

21 rows selected.

All queries returned the expected results above.

Performance Test Data

Each test set consisted of 100 records, with the str column containing a bracket expression whose shape depends on width (w) and depth (d) parameters, as follows:

  • Each str column contains w bracket pairs
  • The str column begins with a 3-character record number
  • After the record number come d opening brackets, each followed by 3 characters of text, like '(001', then the d matching closing brackets, then the remaining w-d pairs in an unnested sequence, like '(001)'

When w=d the pairs are fully nested, and when d=0 there is no nesting, just a sequence of '(123)' strings.
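A data set of this shape can be built in a single insert; the following is a sketch only, assuming 0 < d < w (the actual setup script in the repo handles the parameterisation and edge cases):

INSERT INTO bracket_strings (id, str)
WITH nest AS ( -- d nested pairs: (001(002...(00d followed by d closing brackets
  SELECT LISTAGG ('(' || LPAD (LEVEL, 3, '0')) WITHIN GROUP (ORDER BY LEVEL)
           || RPAD (')', :d, ')') str
    FROM DUAL CONNECT BY LEVEL <= :d
), flat AS (   -- w-d unnested pairs: (001)(002)...
  SELECT LISTAGG ('(' || LPAD (LEVEL, 3, '0') || ')') WITHIN GROUP (ORDER BY LEVEL) str
    FROM DUAL CONNECT BY LEVEL <= :w - :d
)
SELECT ROWNUM, LPAD (ROWNUM, 3, '0') || n.str || f.str
  FROM nest n, flat f, (SELECT NULL FROM DUAL CONNECT BY LEVEL <= 100)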

This choice of test data sets allows us to see whether the number of brackets and the depth of nesting each have an effect on performance.

  • Depth fixed, at small width point; width varies: d=100, w=(100, 200, 300, 400)
  • Width fixed at high depth point; depth varies: w=400, d=(0, 100, 200, 300, 400)

The output from the test queries therefore consists of 100*w records with a record identifier and a bracketed string. For performance testing purposes the benchmarking framework writes the results to a file in csv format, with only the query execution steps counted in the timings.

All the queries showed strong time correlation with width, with smaller correlation with depth.

Connect By, Analytics, Regex Query (CBL_QRY)

WITH    d ( id, str, pos ) as (
      select id, str, regexp_instr(str, '[()]', 1, level)
      from   bracket_strings
      connect by level <= length(str) - length(translate(str, 'x()', 'x'))
             and prior id = id
             and prior sys_guid() is not null
    ),
    p ( id, str, pos, flag, l_ct, ct ) as (
      select id, str, pos, case substr(str, pos, 1) when '(' then 1 else -1 end,
             sum(case substr(str, pos, 1) when '(' then 1         end) over (partition by id order by pos),
             sum(case substr(str, pos, 1) when '(' then 1 else -1 end) over (partition by id order by pos)
      from   d
    ),
    f ( id, str, pos, flag, l_ct, ct, o_ct ) as (
      select id, str, pos, flag, l_ct, ct + case flag when 1 then 0 else 1 end as ct,
             row_number() over (partition by id, flag, ct order by pos)
      from   p
    )
select   /*+ CBL_QRY gather_plan_statistics */ id,
        min(case when flag =  1 then pos end) as o_pos,
        min(case when flag = -1 then pos end) as c_pos,
        substr(str, min(case when flag =  1 then pos end),
                    min(case when flag = -1 then pos end) - min(case when flag =  1 then pos end) + 1) str
from    f
group by id, str, ct, o_ct
order by 1, 2

Plan hash value: 2736674058

------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|
------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                 |      1 |        |  40000 |00:00:16.30 |      43 |  40000 |  40000 |       |       |          |         |
|   1 |  SORT ORDER BY                      |                 |      1 |    100 |  40000 |00:00:16.30 |      43 |  40000 |  40000 |    21M|  1702K|   19M (0)|         |
|   2 |   HASH GROUP BY                     |                 |      1 |    100 |  40000 |00:00:16.24 |      43 |  40000 |  40000 |    85M|  7293K|   85M (0)|         |
|   3 |    VIEW                             |                 |      1 |    100 |  80000 |00:01:19.35 |      43 |  40000 |  40000 |       |       |          |         |
|   4 |     WINDOW SORT                     |                 |      1 |    100 |  80000 |00:01:19.27 |      43 |  40000 |  40000 |   175M|  4458K|   97M (1)|     157K|
|   5 |      VIEW                           |                 |      1 |    100 |  80000 |00:01:05.90 |      40 |  20000 |  20000 |       |       |          |         |
|   6 |       WINDOW SORT                   |                 |      1 |    100 |  80000 |00:01:05.86 |      40 |  20000 |  20000 |   175M|  4457K|   97M (0)|     157K|
|   7 |        VIEW                         |                 |      1 |    100 |  80000 |00:00:09.77 |      38 |      0 |      0 |       |       |          |         |
|*  8 |         CONNECT BY WITHOUT FILTERING|                 |      1 |        |  80000 |00:00:02.26 |      38 |      0 |      0 |   267K|   267K|  237K (0)|         |
|   9 |          TABLE ACCESS FULL          | BRACKET_STRINGS |      1 |    100 |    100 |00:00:00.01 |      38 |      0 |      0 |       |       |          |         |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   8 - access("ID"=PRIOR NULL)

Notes on CBL_QRY

Subquery d uses regexp_instr with connect by to generate a row for each bracket in each string (the prior sys_guid() condition is the standard trick to avoid the connect-by-loop error when rows connect to themselves). Subquery p then computes the running bracket-count difference with analytic sums, and f numbers the brackets within each (id, flag, ct) partition, so that an opening bracket and its matching closing bracket share the same (ct, o_ct) grouping key in the final aggregation.

Connect By, Match_Recognize Query (MRB_QRY)

WITH b as
(
select
  substr(str,level,1) s
  ,level n
  ,id
  ,str
from
  bracket_strings
connect by
id =  prior id
and substr(str,level,1) is not null
and prior sys_guid() is not null
)
select  /*+ MRB_QRY gather_plan_statistics */ 
  id
  ,o_pos
  ,c_pos
  ,substr(str,o_pos,c_pos - o_pos + 1) str
from
b
MATCH_RECOGNIZE (
partition by id
ORDER BY n
MEASURES 
  str as str
  ,FIRST( N) AS o_pos
  ,LAST( N) AS c_pos
one ROW PER MATCH
AFTER MATCH SKIP to next row
PATTERN (ob (ob | nb | cb)*? lcb)
DEFINE
  ob as ob.s = '('
  ,cb as cb.s = ')'
  ,nb as nb.s not in ('(',')')
  ,lcb as lcb.s = ')' and (count(ob.s) = count(cb.s) + 1)
) MR

Plan hash value: 2214599770

-----------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                 |                 |      1 |        |  40000 |00:03:38.26 |      40 |   4473K|  50075 |       |       |          |
|   1 |  SORT ORDER BY                   |                 |      1 |    100 |  40000 |00:03:38.26 |      40 |   4473K|  50075 |    21M|  1702K|   19M (0)|
|   2 |   VIEW                           |                 |      1 |    100 |  40000 |00:04:48.17 |      40 |   4473K|  50075 |       |       |          |
|   3 |    MATCH RECOGNIZE SORT          |                 |      1 |    100 |  40000 |00:04:48.16 |      40 |   4473K|  50075 |   440M|  6922K|  163M (0)|
|   4 |     VIEW                         |                 |      1 |    100 |    200K|00:00:04.65 |      38 |      0 |      0 |       |       |          |
|*  5 |      CONNECT BY WITHOUT FILTERING|                 |      1 |        |    200K|00:00:04.54 |      38 |      0 |      0 |   267K|   267K|  237K (0)|
|   6 |       TABLE ACCESS FULL          | BRACKET_STRINGS |      1 |    100 |    100 |00:00:00.01 |      38 |      0 |      0 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   5 - access("ID"=PRIOR NULL)

Notes on MRB_QRY

This uses the v12.1 feature Match_Recognize, operating on the characters in the string after conversion to rows. The pattern matches an opening bracket, then lazily consumes further characters until the first closing bracket at which the opening and closing bracket counts within the match balance.

With PL/SQL Function, Arrays Query (WFB_QRY)

WITH FUNCTION Parse_Brackets (p_str VARCHAR2) RETURN bra_lis_type IS /* WFB_QRY */ 
  c_n_ob       CONSTANT PLS_INTEGER := Length (p_str) - Length (Replace (p_str, '(', ''));
  l_ob_lis              SYS.ODCINumberList := SYS.ODCINumberList();
  l_cb_lis              SYS.ODCINumberList := SYS.ODCINumberList();
  TYPE b_rec_type   IS  RECORD (pos INTEGER, diff INTEGER);
  TYPE b_lis_type   IS  VARRAY(32767) OF b_rec_type;
  l_b_lis               b_lis_type := b_lis_type(NULL);
  l_bra_lis             bra_lis_type := bra_lis_type();
  n_b                   PLS_INTEGER := 0;
  n_ob                  PLS_INTEGER := 0;
  n_cb                  PLS_INTEGER := 0;
  l_chr                 VARCHAR2(1);
  l_o_diff              PLS_INTEGER;
BEGIN

  IF c_n_ob = 0 THEN
    RETURN NULL;
  END IF;
  l_ob_lis.EXTEND (c_n_ob);
  l_bra_lis.EXTEND (c_n_ob);
  l_cb_lis.EXTEND (c_n_ob);
  l_b_lis.EXTEND (c_n_ob + c_n_ob);
 
  FOR i IN 1..Length (p_str) LOOP
 
    l_chr := Substr (p_str, i, 1);
    IF l_chr NOT IN ('(', ')') THEN CONTINUE; END IF;

    n_b := n_b + 1;
    l_b_lis(n_b).pos := i;
 
    IF l_chr = '(' THEN
      n_ob := n_ob + 1;
      l_ob_lis(n_ob) := n_b;
    ELSE
      n_cb := n_cb + 1;
      l_cb_lis(n_cb) := n_b;
    END IF;

    l_b_lis(n_b).diff := n_ob - n_cb;
 
  END LOOP;

  FOR i IN 1..n_ob LOOP

    l_o_diff := l_b_lis (l_ob_lis(i)).diff;
    FOR j IN 1..n_cb LOOP

      IF l_b_lis (l_cb_lis(j)).pos < l_b_lis (l_ob_lis(i)).pos THEN CONTINUE; END IF;
      IF l_o_diff = l_b_lis (l_cb_lis(j)).diff + 1 THEN

        l_bra_lis(i) := bra_rec_type (l_b_lis(l_ob_lis(i)).pos, l_b_lis(l_cb_lis(j)).pos, Substr (p_str, l_b_lis(l_ob_lis(i)).pos, l_b_lis(l_cb_lis(j)).pos - l_b_lis(l_ob_lis(i)).pos + 1));
        EXIT;

      END IF;

    END LOOP;
 
  END LOOP;
  RETURN l_bra_lis;

END;
SELECT
    b.id, t.o_pos, t.c_pos, t.str
  FROM bracket_strings b
  OUTER APPLY TABLE (Parse_Brackets (b.str)) t
 ORDER BY 1, 2

Notes on WFB_QRY

This uses the v12.1 feature whereby a PL/SQL function can be included directly in a query.

Pipelined PL/SQL Function, Arrays Query (PFB_QRY)

PFB_QRY - Pipelined Function

CREATE OR REPLACE TYPE bra_rec_type IS OBJECT (o_pos INTEGER, c_pos INTEGER, str VARCHAR2(4000));
/
CREATE TYPE bra_lis_type IS VARRAY(4000) OF bra_rec_type;
/
FUNCTION Parse_Brackets (p_str VARCHAR2) RETURN bra_lis_type PIPELINED IS
  c_n_ob       CONSTANT PLS_INTEGER := Length (p_str) - Length (Replace (p_str, '(', ''));
  l_ob_lis              SYS.ODCINumberList := SYS.ODCINumberList();
  l_cb_lis              SYS.ODCINumberList := SYS.ODCINumberList();
  TYPE b_rec_type   IS  RECORD (pos INTEGER, diff INTEGER);
  TYPE b_lis_type   IS  VARRAY(32767) OF b_rec_type;
  l_b_lis               b_lis_type := b_lis_type(NULL);
  l_bra_lis             bra_lis_type := bra_lis_type();
  n_b                   PLS_INTEGER := 0;
  n_ob                  PLS_INTEGER := 0;
  n_cb                  PLS_INTEGER := 0;
  l_chr                 VARCHAR2(1);
  l_o_diff              PLS_INTEGER;
BEGIN

  IF c_n_ob = 0 THEN
    RETURN;
  END IF;
  l_ob_lis.EXTEND (c_n_ob);
  l_bra_lis.EXTEND (c_n_ob);
  l_cb_lis.EXTEND (c_n_ob);
  l_b_lis.EXTEND (c_n_ob + c_n_ob);
 
  FOR i IN 1..Length (p_str) LOOP
 
    l_chr := Substr (p_str, i, 1);
    IF l_chr NOT IN ('(', ')') THEN CONTINUE; END IF;

    n_b := n_b + 1;
    l_b_lis(n_b).pos := i;
 
    IF l_chr = '(' THEN
      n_ob := n_ob + 1;
      l_ob_lis(n_ob) := n_b;
    ELSE
      n_cb := n_cb + 1;
      l_cb_lis(n_cb) := n_b;
    END IF;

    l_b_lis(n_b).diff := n_ob - n_cb;
 
  END LOOP;

  FOR i IN 1..n_ob LOOP

    l_o_diff := l_b_lis (l_ob_lis(i)).diff;
    FOR j IN 1..n_cb LOOP

      IF l_b_lis (l_cb_lis(j)).pos < l_b_lis (l_ob_lis(i)).pos THEN CONTINUE; END IF;
      IF l_o_diff = l_b_lis (l_cb_lis(j)).diff + 1 THEN

        PIPE ROW (bra_rec_type (l_b_lis(l_ob_lis(i)).pos, l_b_lis(l_cb_lis(j)).pos, Substr (p_str, l_b_lis(l_ob_lis(i)).pos, l_b_lis(l_cb_lis(j)).pos - l_b_lis(l_ob_lis(i)).pos + 1)));
        EXIT;

      END IF;

    END LOOP;
 
  END LOOP;

END Parse_Brackets;

Plan hash value: 3367347570

---------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |                 |      1 |        |  40000 |00:00:01.17 |      38 |       |       |          |
|   1 |  SORT ORDER BY                       |                 |      1 |    816K|  40000 |00:00:01.17 |      38 |    21M|  1702K|   19M (0)|
|   2 |   NESTED LOOPS OUTER                 |                 |      1 |    816K|  40000 |00:00:16.80 |      38 |       |       |          |
|   3 |    TABLE ACCESS FULL                 | BRACKET_STRINGS |      1 |    100 |    100 |00:00:00.01 |      38 |       |       |          |
|   4 |    VIEW                              | VW_LAT_D4FD8C38 |    100 |   8168 |  40000 |00:00:01.13 |       0 |       |       |          |
|   5 |     COLLECTION ITERATOR PICKLER FETCH| PARSE_BRACKETS  |    100 |   8168 |  40000 |00:00:01.10 |       0 |       |       |          |
---------------------------------------------------------------------------------------------------------------------------------------------

PFB_QRY - Query

SELECT  /*+ PFB_QRY gather_plan_statistics */
        b.id        id, 
        t.o_pos     o_pos, 
        t.c_pos     c_pos,
        t.str       str
  FROM bracket_strings b
  OUTER APPLY TABLE (Strings.Parse_Brackets (b.str)) t
 ORDER BY b.id, t.o_pos

Plan hash value: 3367347570

---------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |                 |      1 |        |  40000 |00:00:01.15 |      38 |       |       |          |
|   1 |  SORT ORDER BY                       |                 |      1 |    816K|  40000 |00:00:01.15 |      38 |    21M|  1702K|   19M (0)|
|   2 |   NESTED LOOPS OUTER                 |                 |      1 |    816K|  40000 |00:00:01.95 |      38 |       |       |          |
|   3 |    TABLE ACCESS FULL                 | BRACKET_STRINGS |      1 |    100 |    100 |00:00:00.01 |      38 |       |       |          |
|   4 |    VIEW                              | VW_LAT_D4FD8C38 |    100 |   8168 |  40000 |00:00:01.51 |       0 |       |       |          |
|   5 |     COLLECTION ITERATOR PICKLER FETCH| PARSE_BRACKETS  |    100 |   8168 |  40000 |00:00:01.50 |       0 |       |       |          |
---------------------------------------------------------------------------------------------------------------------------------------------

Notes on PFB_QRY

This uses a PL/SQL pipelined database function, called from the query via a TABLE operator, in contrast to WFB_QRY where the function is defined inline within the query itself.

Performance Testing Results

Key

If CPU time (y) is proportional to the varying dimension (x), we would have:

y = kx

and so, for any two data points 1 and 2:

y2.x1/(y1.x2) = 1

We can take the actual value of the linear ratio as a marker for linear proportionality or not.

If CPU time (y) is proportional to the square of the varying dimension (x), we would have:

y = kx.x

and so, for any two data points 1 and 2:

y2.x1.x1/(y1.x2.x2) = 1

We can take the actual value of the quadratic ratio as a marker for quadratic proportionality or not.
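For example, between two data points where the dimension doubles (x2 = 2.x1): if the time also doubles, the linear ratio is 1, consistent with linear growth; if the time quadruples (y2 = 4.y1), the linear ratio is 4.y1.x1/(y1.2.x1) = 2, while the quadratic ratio is 4.y1.x1.x1/(y1.2.x1.2.x1) = 1, consistent with quadratic growth.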

  • LRTB = Ratio to best time at high data point
  • RTP_L = Linear ratio as defined above, averaged over successive data points
  • RTP_Q = Quadratic ratio as defined above, averaged over successive data points

Depth fixed, at small width point; width varies: d=100, w=(100, 200, 300, 400)

Notes on Results for Fixed Depth

  • CPU time increases with width for all queries at a faster than linear rate
  • CBL_QRY is significantly faster than MRB_QRY, which appears to be rising quadratically
  • Both WFB_QRY and PFB_QRY are much faster than the non-PL/SQL queries
  • PFB_QRY is slightly faster than WFB_QRY. The difference could be regarded as too small to be significant, but it is consistent across the data points, despite the context switching involved in calling a database function

Width fixed at high depth point; depth varies: w=400, d=(0, 100, 200, 300, 400)

Notes on Results for Fixed Width

  • CPU time increases with depth for all queries although at a sublinear rate
  • CBL_QRY has a big jump in CPU time between d=0 and d=100, where nesting starts to come in, and MRB_QRY was actually faster than CBL_QRY without nesting