A Note on Running Sums and Products in SQL

It’s a common requirement to compute running sums over time in SQL; for example, to find sales volumes to date. This is easy to do using analytic functions, like this, schematically:

SELECT SUM(sales) OVER (PARTITION BY partition_key ORDER BY date)
  FROM ...

The ORDER BY clause implicitly adds a window of RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW, which is what makes the aggregate a running sum; if you omit it you get the overall totals by partition_key.
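
To make the default explicit, the query above could equally be written, schematically as before, with the full windowing clause:

SELECT SUM(sales) OVER (PARTITION BY partition_key ORDER BY date
                        RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
  FROM ...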

Recently I needed to compute the products of some numeric factors across records over time. This kind of requirement often arises in financial calculations, and is slightly trickier, since Oracle doesn’t have an analytic function for products as it does for sums. However, we can achieve the same functionality using the well-known mathematical equivalence:

	log(xy) = log(x) + log(y)

and therefore:

	xy = exp(log(x) + log(y))

More generally, if we have a set of N records and a function, f, defined on a record, we can write the running sum for f, at n in (1..N), as:

	s(n) = f(1) + f(2) + ... + f(n)

and the running product as:

	p(n) = f(1) * f(2) * ... * f(n)

Then from the above equivalence, we can write:

	p(n) = exp(ln(f(1)) + ln(f(2)) + ... + ln(f(n)))

that is, the running product is the exponential of the running sum of the logs.

This means we can get the running products in SQL using the analytic SUM combined with the (natural) log (LN in Oracle SQL) and exp functions. Let’s see how it works using Oracle’s HR demo schema. We’ll take the employees table and use:

  • department_id as the partition key
  • employee_id as the dimension to order by
  • salary as the measure

To start with, let’s get the running and total sums, showing results for department_id = 60:

SELECT department_id, employee_id, salary,
       SUM(salary) OVER (PARTITION BY department_id ORDER BY employee_id) running_sum,
       SUM(salary) OVER (PARTITION BY department_id) total_sum
  FROM employees
 ORDER BY department_id, employee_id
/
Sums

DEPARTMENT_ID EMPLOYEE_ID     SALARY RUNNING_SUM  TOTAL_SUM
------------- ----------- ---------- ----------- ----------
           60         103       9000        9000      28800
           60         104       6000       15000      28800
           60         105       4800       19800      28800
           60         106       4800       24600      28800
           60         107       4200       28800      28800

Next, let’s use the above equivalence to get the running and total products of the expression (1 + salary/10000), which we’ll call mult:

SELECT department_id, employee_id, salary, (1 + salary/10000) mult,
       EXP(SUM(LN((1 + salary/10000))) OVER (PARTITION BY department_id ORDER BY employee_id)) running_prod,
       EXP(SUM(LN((1 + salary/10000))) OVER (PARTITION BY department_id)) total_prod
  FROM employees
 ORDER BY department_id, employee_id
/
Products

DEPARTMENT_ID EMPLOYEE_ID     SALARY       MULT RUNNING_PROD TOTAL_PROD
------------- ----------- ---------- ---------- ------------ ----------
           60         103       9000        1.9          1.9 9.45551872
           60         104       6000        1.6         3.04 9.45551872
           60         105       4800       1.48       4.4992 9.45551872
           60         106       4800       1.48     6.658816 9.45551872
           60         107       4200       1.42   9.45551872 9.45551872
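
A caveat on the technique: LN is defined only for positive values, so a zero or negative factor would raise an error (ORA-01428). That cannot happen with this dataset, but where it could, one way to handle it is to take the product of the absolute values and deal with zeros and signs separately. Here is a sketch, not part of the original demo, written against the multipliers subquery factor defined in the MODEL example below:

SELECT department_id, employee_id,
       CASE WHEN MIN (Abs (mult)) OVER (PARTITION BY department_id ORDER BY employee_id) = 0
            THEN 0                                      -- any zero factor so far makes the product zero
            ELSE EXP (SUM (LN (Abs (Nullif (mult, 0))))  -- Nullif guards LN: analytics are evaluated for all rows
                      OVER (PARTITION BY department_id ORDER BY employee_id))
               * CASE WHEN Mod (SUM (CASE WHEN mult < 0 THEN 1 END)
                                OVER (PARTITION BY department_id ORDER BY employee_id), 2) = 1
                      THEN -1 ELSE 1 END                -- an odd count of negative factors flips the sign
       END running_prod
  FROM multipliers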

If we didn’t have this technique we could compute the results using explicit recursion, either by MODEL clause, or by recursive subquery factors. Let’s do it those ways out of interest. First here’s a MODEL clause solution:

WITH multipliers AS (
SELECT department_id, employee_id, salary, (1 + salary/10000) mult, 
       COUNT(*) OVER (PARTITION BY department_id) n_emps
  FROM employees
)
SELECT department_id, employee_id, salary, mult, running_prod, total_prod
  FROM multipliers
 MODEL
 	PARTITION BY (department_id)
 	DIMENSION BY (Row_Number() OVER (PARTITION BY department_id ORDER BY employee_id) rn)
 	MEASURES (employee_id, salary, mult, mult running_prod, mult total_prod, n_emps)
 	RULES (
 		running_prod[rn > 1] = mult[CV()] * running_prod[CV() - 1],
 		total_prod[any] = running_prod[n_emps[CV()]]
 	)
 ORDER BY department_id, employee_id

Finally, here’s a solution using recursive subquery factors:

WITH multipliers AS (
SELECT department_id, employee_id, salary, (1 + salary/10000) mult, 
       Row_Number() OVER (PARTITION BY department_id ORDER BY employee_id) rn,
       COUNT(*) OVER (PARTITION BY department_id) n_emps
  FROM employees
 WHERE department_id = 60
), rsf (department_id, employee_id, rn, salary, mult, running_prod) AS (
	SELECT department_id, employee_id, rn, salary, mult, mult running_prod
	  FROM multipliers
	 WHERE rn = 1
	UNION ALL
	SELECT m.department_id, m.employee_id, m.rn, m.salary, m.mult, r.running_prod * m.mult
	  FROM rsf r
	  JOIN multipliers m
	    ON m.rn = r.rn + 1
	   AND m.department_id = r.department_id
)
SELECT department_id, employee_id, salary, mult, running_prod, 
       Last_Value(running_prod) OVER (PARTITION BY department_id) total_prod
  FROM rsf
 ORDER BY department_id, employee_id

You can see the scripts and full output on my new GitHub project, Small SQL projects, in the sums_products folder.

You can get the full detail on using analytic functions from the Oracle doc:
SQL for Analysis and Reporting






Knapsacks and Networks in SQL

I opened a GitHub account, Brendan’s GitHub Page, last year and have added a number of projects since then, in PL/SQL and other 3GL languages. Partly in response to a request for the code behind one of my blog articles on an interesting SQL problem, I recently created a new repo for the SQL behind a group of articles on solving difficult combinatorial optimisation problems via ‘advanced’ SQL techniques such as recursive subquery factoring and the Model clause: sql_demos – Brendan’s repo for interesting SQL. It includes installation scripts with object creation and data setup, and scripts to run the SQL on the included datasets. The idea is that anyone with the prerequisites should be able to reproduce my results within a few minutes of downloading the repo.

[Left image from Knapsack problem; right image copied from Chapter 11 Dynamic Programming]

In this article I embed each of the earlier articles relevant to the GitHub repo with a brief preamble.

The first two articles are from January 2013 and use recursive subquery factoring to find exact solutions for the single and multiple knapsack problem, and also include PL/SQL solutions for comparison. They avoid the ‘brute force’ approach by truncating search paths as soon as limit constraints are exceeded. The cumulative paths are stored in string variables passed through the iterations (which would not be possible with the older Connect By hierarchical syntax).
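
To illustrate the idea, here is a minimal sketch, with a hypothetical items table rather than the articles’ actual queries, of accumulating the path as a string and truncating as soon as the weight limit is exceeded:

WITH rsf (item_id, path, tot_weight, tot_profit) AS (
SELECT id, Cast (id AS VARCHAR2(4000)), item_weight, item_profit -- Cast fixes the column width for growth
  FROM items
 UNION ALL
SELECT i.id,
       r.path || ',' || i.id,                        -- cumulative path carried as a string
       r.tot_weight + i.item_weight,
       r.tot_profit + i.item_profit
  FROM rsf r
  JOIN items i
    ON i.id > r.item_id                              -- generate each item set only once
 WHERE r.tot_weight + i.item_weight <= :weight_limit -- truncate the search path here
)
SELECT path, tot_profit
  FROM rsf
 ORDER BY tot_profit DESC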

In these articles I illustrate the nature of the problems using Visio diagrams, and include dimensional performance benchmarking results, using a technique that I presented on at last year’s Ireland OUG conference: Dimensional Performance Benchmarking of SQL – IOUG Presentation. I also illustrate the queries using my own method for diagramming SQL queries.

A Simple SQL Solution for the Knapsack Problem (SKP-1), January 2013

An SQL Solution for the Multiple Knapsack Problem (SKP-m), January 2013

The next article uses the Model clause to find a more general solution to a problem posed on AskTom as a ‘bin fitting’ problem. I also solved the problem by other methods, including recursive subquery factoring. I illustrate the problem itself, as well as the Model iteration scheme, using Visio diagrams, and again include dimensional performance benchmarking. The results show how quadratic performance variation can be turned into much faster linear variation by means of a temporary table in this kind of problem.

SQL for the Balanced Number Partitioning Problem, May 2013

This article arose from a question on OTN, and concerns a type of knapsack or bin-fitting problem that is quite tricky to solve in SQL, where the items fall into categories on which there are separate constraints. I introduced a new idea here: filtering out unpromising paths within recursive subquery factoring by means of analytic functions, in order to allow the technique to generate solutions for larger problems without guaranteed optimality, but in shorter time. Two realistic datasets were used, one from the original poster, and another that I obtained by scraping a website.
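
The filtering idea can be sketched along the same lines as above, again with hypothetical names. An analytic rank is computed in the recursive member’s select list, where analytic functions are permitted, and is applied in the WHERE clause one iteration later, since it cannot be filtered within its own query block:

WITH rsf (item_id, path, tot_price, tot_value, rnk) AS (
SELECT id, Cast (id AS VARCHAR2(4000)), item_price, item_value, 1
  FROM items
 UNION ALL
SELECT i.id,
       r.path || ',' || i.id,
       r.tot_price + i.item_price,
       r.tot_value + i.item_value,
       Row_Number () OVER (ORDER BY r.tot_value + i.item_value DESC) -- rank the newly extended paths
  FROM rsf r
  JOIN items i
    ON i.id > r.item_id
 WHERE r.tot_price + i.item_price <= :price_limit -- hard constraint, as before
   AND r.rnk <= :keep                             -- heuristic filter: keep only the most promising paths
)
SELECT path, tot_value
  FROM rsf
 ORDER BY tot_value DESC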

SQL for the Fantasy Football Knapsack Problem, June 2013

This article is on a classic ‘hard’ optimisation problem, and uses recursive subquery factoring with the same filtering technique as the previous article, and shows that it’s possible to solve a problem involving 312 American cities quite quickly in pure SQL using the approximation technique. It also uses a simple made-up example dataset to illustrate its working.

SQL for the Travelling Salesman Problem, July 2013

The following two articles concern finding shortest paths between given nodes in a network, and arose from a question on OTN. The first one again uses recursive subquery factoring with a filtering mechanism to exclude paths as early as possible, in a similar way to the approximate solution methods in the earlier articles. In this case, however, reasoning about the nature of the problem shows that we are not in fact sacrificing optimality. The article has quite a lot of explanatory material on how the SQL works, and uses small dataset examples.

The second article considers how to improve performance further by obtaining a preliminary approximate solution that can be used as a bounding mechanism in a second step to find the exact solutions. This article uses two realistic networks as examples, including one having 428,156 links.

SQL for Shortest Path Problems, April 2015

SQL for Shortest Path Problems 2: A Branch and Bound Approach, May 2015

In the article above I cited results from a general network analysis package I had developed that obtains all the distinct connected subnetworks with their structures in an efficient manner using PL/SQL recursion. It is worth noting that for that kind of problem recursive SQL alone is very inefficient, and I wrote the following article to try to explain why that is so, and why the Connect By syntax is generally much worse than recursive subquery factoring.

Recursive SQL for Network Analysis, and Duality, September 2015

The PL/SQL package mentioned, which I think implements a ‘named’ algorithm although I didn’t know that when I wrote it (I don’t recall the name right now, sorry 🙁 ), is available on GitHub: Brendan’s network structural analysis Oracle package, with article:

PL/SQL Pipelined Function for Network Analysis, May 2015








Dimensional Benchmarking of String Splitting SQL

I noticed a question on AskTom last November concerning SQL for splitting delimited strings, Extract domain names from a column having multiple email addresses, a kind of question that arises frequently on the forums. There was some debate between reviewers Rajeshwaran Jeyabal, Stew Ashton and the AskTom guys on whether an XML-based solution performs better or worse than a more ‘classic’ solution based on the Substr and Instr functions and collections. AskTom’s Chris Saxon noted:

For me this just highlights the importance of testing in your own environment with your own data. Just because someone produced a benchmark showing X is faster, doesn’t mean it will be for you.

For me, relative performance is indeed frequently dependent on the size and ‘shape’ of the data used for testing. As I have my own ‘dimensional’ benchmarking framework, A Framework for Dimensional Benchmarking of SQL Performance, I was able to very quickly adapt Rajesh’s test data to benchmark across numbers of records and numbers of delimiters, and I put the results on the thread. I then decided to take the time to expand the scope to include other solutions, and to use more general data sets, where the token lengths vary as well as the number of tokens per record.

In fact the scope expanded quite a bit, as I found more and more ways to solve the problem, and I have only now found the time to write it up. Here is a list of all the queries considered:

Queries using Connect By for row generation

  • MUL_QRY – Cast/Multiset to correlate Connect By
  • LAT_QRY – v12 Lateral correlated Connect By
  • UNH_QRY – Uncorrelated Connect By unhinted
  • RGN_QRY – Uncorrelated Connect By with leading hint
  • GUI_QRY – Connect By in main query using sys_guid trick
  • RGX_QRY – Regular expression function, Regexp_Substr

Queries not using Connect By for row generation

  • XML_QRY – XMLTABLE
  • MOD_QRY – Model clause
  • PLF_QRY – database pipelined function
  • WFN_QRY – ‘WITH’ PL/SQL function directly in query
  • RSF_QRY – Recursive Subquery Factor
  • RMR_QRY – Match_Recognize

Test Problem

DELIMITED_LISTS Table

CREATE TABLE delimited_lists(id INT, list_col VARCHAR2(4000))
/

Functional Test Data

The test data consist of pipe-delimited tokens (‘|’) in a VARCHAR2(4000) column in a table with a separate integer unique identifier. For functional testing we will add a single ‘normal’ record with two tokens, plus four more records designed to validate null-token edge cases as follows:

  1. Normal case, two tokens
  2. Leading null token, then two not null tokens
  3. Trailing null token, after two not null tokens
  4. Two not null tokens, with a null token in the middle
  5. Two null tokens only
     ID LIST_COL
------- ------------------------------------------------------------
      1 token_11|token_12
      2 |token_21|token_22
      3 token_31|token_32|
      4 token_41||token_42
      5 |
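
These five records could be created with inserts such as the following (the GitHub project linked at the end has the actual setup scripts):

INSERT INTO delimited_lists VALUES (1, 'token_11|token_12');
INSERT INTO delimited_lists VALUES (2, '|token_21|token_22');
INSERT INTO delimited_lists VALUES (3, 'token_31|token_32|');
INSERT INTO delimited_lists VALUES (4, 'token_41||token_42');
INSERT INTO delimited_lists VALUES (5, '|');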

Functional Test Results

     ID TOKEN
------- ----------
      1 token_11
      1 token_12
      2
      2 token_21
      2 token_22
      3 token_31
      3 token_32
      3
      4 token_41
      4
      4 token_42
      5
      5

13 rows selected.

All queries returned the expected results above, except that the XML query returned 12 rows with only a single null token returned for record 5. In the performance testing, no null tokens were included, and all queries returned the same results.

Performance Test Data

Each test set consisted of 3,000 records with the list_col column containing the delimited string dependent on width (w) and depth (d) parameters, as follows:

  • Each record contains w tokens
  • Each token contains d characters from the sequence 1234567890 repeated as necessary

The output from the test queries therefore consists of 3,000*w records with a unique identifier and a token of length d. For performance testing purposes the benchmarking framework writes the results to a file in csv format, while counting only the query steps in the query timing results.
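
As an indication of how such a test set might be generated for given width and depth, here is a sketch only (the framework’s actual setup code may differ), with :w and :d as the width and depth parameters; the token text cycles through '1234567890', and Rpad both repeats and truncates the strings:

INSERT INTO delimited_lists (id, list_col)
SELECT LEVEL,
       RTrim (Rpad (tok || '|', :w * (Length (tok) + 1), tok || '|'), '|')
  FROM (SELECT Rpad ('1234567890', :d, '1234567890') tok FROM DUAL)
CONNECT BY LEVEL <= 3000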

In Oracle, up to version 11.2, VARCHAR2 expressions cannot be longer than 4,000 characters, so I decided to run the framework for four sets of parameters, as follows:

  • Depth fixed, high; width range low: d=18, w=(50,100,150,200)
  • Depth fixed, low; width range high: d=1, w=(450,900,1350,1800)
  • Width fixed, low; depth range high: w=5, d=(195,390,585,780)
  • Width fixed, high; depth range low: w=150, d=(6,12,18,24)

All the queries showed strong time correlation with width, while a few also showed strong correlation with depth.

Queries

All execution plans are from the data point with Width=1800, Depth=1, which has the largest number of tokens per record.

Multiset Query (MUL_QRY)

SELECT d.id   id,
       Substr (d.list_col, Instr ('|' || d.list_col, '|', 1, t.COLUMN_VALUE), Instr (d.list_col || '|', '|', 1, t.COLUMN_VALUE) - Instr ('|' || d.list_col, '|', 1, t.COLUMN_VALUE)) token
  FROM delimited_lists d, 
       TABLE (CAST (MULTISET (SELECT LEVEL FROM DUAL CONNECT BY LEVEL <= Nvl (Length(d.list_col), 0) - Nvl (Length (Replace (d.list_col, '|')), 0) + 1) AS SYS.ODCINumberlist)) t

Plan hash value: 462687286

--------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                 |      1 |        |   5400K|00:01:52.86 |    3009 |       |       |          |
|   1 |  NESTED LOOPS                       |                 |      1 |     24M|   5400K|00:01:52.86 |    3009 |       |       |          |
|   2 |   TABLE ACCESS FULL                 | DELIMITED_LISTS |      1 |   3000 |   3000 |00:00:00.01 |    3009 |       |       |          |
|   3 |   COLLECTION ITERATOR SUBQUERY FETCH|                 |   3000 |   8168 |   5400K|00:01:49.96 |       0 |       |       |          |
|   4 |    CONNECT BY WITHOUT FILTERING     |                 |   3000 |        |   5400K|00:01:48.83 |       0 |  2048 |  2048 | 2048  (0)|
|   5 |     FAST DUAL                       |                 |   3000 |      1 |   3000 |00:00:00.01 |       0 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------------

Notes on MUL_QRY

This is the ‘classic’ CONNECT BY solution referred to above, which appears frequently on AskTom and elsewhere, and I copied the version used by Rajesh. The somewhat convoluted casting between subquery and array and back to SQL records via Multiset allows the prior table in the FROM list to be referenced within the inline view, which is otherwise not permitted in versions earlier than 12.1, where the LATERAL keyword was introduced.

Despite this query being regarded as the ‘classic’ CONNECT BY solution to string-splitting, we will find that it is inferior in performance to a query I wrote myself across all data points considered. The new query is also simpler, but is not the most efficient of all methods, as we see later.

Lateral Query (LAT_QRY)

SELECT d.id                id,
       l.subs              token
FROM delimited_lists d
CROSS APPLY (
  SELECT Substr (d.list_col, pos + 1, Lead (pos, 1, 4000) OVER (ORDER BY pos) - pos - 1) subs, pos
    FROM (
    SELECT Instr (d.list_col, '|', 1, LEVEL) pos
      FROM DUAL
    CONNECT BY
      LEVEL <= Length (d.list_col) - Nvl (Length (Replace (d.list_col, '|')), 0) + 1
    )
) l

Plan hash value: 631504984

-----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                 |                 |      1 |        |   5400K|00:15:41.97 |    3009 |       |       |          |
|   1 |  NESTED LOOPS                    |                 |      1 |   3000 |   5400K|00:15:41.97 |    3009 |       |       |          |
|   2 |   TABLE ACCESS FULL              | DELIMITED_LISTS |      1 |   3000 |   3000 |00:00:00.01 |    3009 |       |       |          |
|   3 |   VIEW                           | VW_LAT_2D0B8FC8 |   3000 |      1 |   5400K|00:02:02.67 |       0 |       |       |          |
|   4 |    WINDOW SORT                   |                 |   3000 |      1 |   5400K|00:02:00.59 |       0 | 43008 | 43008 |38912  (0)|
|   5 |     VIEW                         |                 |   3000 |      1 |   5400K|00:01:58.78 |       0 |       |       |          |
|   6 |      CONNECT BY WITHOUT FILTERING|                 |   3000 |        |   5400K|00:01:48.53 |       0 |  2048 |  2048 | 2048  (0)|
|   7 |       FAST DUAL                  |                 |   3000 |      1 |   3000 |00:00:00.01 |       0 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------------------------

Notes on LAT_QRY

This query is taken from Splitting Strings: Proof!, and uses a v12.1 new feature, described with examples in LATERAL Inline Views. The new feature allows you to correlate an inline view directly, without the convoluted Multiset code, and can also be used with the keywords CROSS APPLY instead of LATERAL. It’s sometimes regarded as having performance advantages, but in this context we will see that avoiding this correlation altogether is best for performance.

Row-generator Query, Unhinted and Hinted (UNH_QRY and RGN_QRY)

Unhinted Query

WITH row_gen AS (
        SELECT LEVEL rn FROM DUAL CONNECT BY LEVEL <= 
            (SELECT Max (Nvl (Length(list_col), 0) - Nvl (Length (Replace (list_col,'|')), 0) + 1)
               FROM delimited_lists)
)
SELECT d.id   id,
       Substr (d.list_col, Instr ('|' || d.list_col, '|', 1, r.rn), Instr (d.list_col || '|', '|', 1, r.rn) - Instr ('|' || d.list_col, '|', 1, r.rn)) token
  FROM delimited_lists d
  JOIN row_gen r
    ON r.rn <= Nvl (Length(d.list_col), 0) - Nvl (Length (Replace (d.list_col,'|')), 0) + 1

Plan hash value: 747926158

---------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                 |      1 |        |   5400K|00:01:55.35 |    2717K|       |       |          |
|   1 |  NESTED LOOPS                  |                 |      1 |    150 |   5400K|00:01:55.35 |    2717K|       |       |          |
|   2 |   VIEW                         |                 |      1 |      1 |   1800 |00:00:07.39 |    1509 |       |       |          |
|   3 |    CONNECT BY WITHOUT FILTERING|                 |      1 |        |   1800 |00:00:07.39 |    1509 |  2048 |  2048 | 2048  (0)|
|   4 |     FAST DUAL                  |                 |      1 |      1 |      1 |00:00:00.01 |       0 |       |       |          |
|   5 |     SORT AGGREGATE             |                 |      1 |      1 |      1 |00:00:00.06 |    1509 |       |       |          |
|   6 |      TABLE ACCESS FULL         | DELIMITED_LISTS |      1 |   3000 |   3000 |00:00:00.01 |    1509 |       |       |          |
|*  7 |   TABLE ACCESS FULL            | DELIMITED_LISTS |   1800 |    150 |   5400K|00:01:53.61 |    2716K|       |       |          |
---------------------------------------------------------------------------------------------------------------------------------------

Execution Plan with Hint /*+ leading (d) */

Plan hash value: 1241630378

----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |                 |      1 |        |   5400K|00:00:02.37 |    3018 |       |       |          |
|   1 |  MERGE JOIN                     |                 |      1 |    150 |   5400K|00:00:02.37 |    3018 |       |       |          |
|   2 |   SORT JOIN                     |                 |      1 |   3000 |   3000 |00:00:00.07 |    1509 |    11M|  1318K|   10M (0)|
|   3 |    TABLE ACCESS FULL            | DELIMITED_LISTS |      1 |   3000 |   3000 |00:00:00.01 |    1509 |       |       |          |
|*  4 |   SORT JOIN                     |                 |   3000 |      1 |   5400K|00:00:01.42 |    1509 |   160K|   160K|  142K (0)|
|   5 |    VIEW                         |                 |      1 |      1 |   1800 |00:00:07.37 |    1509 |       |       |          |
|   6 |     CONNECT BY WITHOUT FILTERING|                 |      1 |        |   1800 |00:00:07.37 |    1509 |  2048 |  2048 | 2048  (0)|
|   7 |      FAST DUAL                  |                 |      1 |      1 |      1 |00:00:00.01 |       0 |       |       |          |
|   8 |      SORT AGGREGATE             |                 |      1 |      1 |      1 |00:00:00.06 |    1509 |       |       |          |
|   9 |       TABLE ACCESS FULL         | DELIMITED_LISTS |      1 |   3000 |   3000 |00:00:00.01 |    1509 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access(INTERNAL_FUNCTION("R"."RN")<=NVL(LENGTH("D"."LIST_COL"),0)-NVL(LENGTH(REPLACE("D"."LIST_COL",'|')),0)+1)
       filter(INTERNAL_FUNCTION("R"."RN")<=NVL(LENGTH("D"."LIST_COL"),0)-NVL(LENGTH(REPLACE("D"."LIST_COL",'|')),0)+1)

Notes on UNH_QRY and RGN_QRY

I wrote the UNH_QRY query in an attempt to avoid the convoluted Multiset approach of the ‘classic’ solution. The reason for the use of arrays and Multiset seems to be that, while we need to ‘generate’ multiple rows for each source row, the number of rows generated has to vary by source record and so the row-generating inline view computes the number of tokens for each record in its where clause.

The use of row-generating subqueries is quite common, but in other cases one often has a fixed number of rows to generate, as in data densification scenarios for example. It occurred to me that, although we don’t know in advance the number of rows to generate for a given record, we do have an upper bound, the maximum token count across all records, and we could generate that many rows in a subquery, then join only as many as each source record requires.

This approach resulted in a simpler and more straightforward query, but it turned out in its initial form to be very slow. The execution plan above shows that the row generator is driving a nested loops join, within which a full scan is performed on the table. The CBO is not designed to optimise this type of algorithmic query, so I added a leading hint to reverse the join order, and this resulted in much better performance. In fact, as we see later, the hinted query outperforms the other CONNECT BY queries, including the v12.1 LAT_QRY query, at all data points considered.
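
For reference, RGN_QRY is simply the unhinted query above with the leading hint from the plan heading added, presumably in the main query block, like this:

WITH row_gen AS (
        SELECT LEVEL rn FROM DUAL CONNECT BY LEVEL <=
            (SELECT Max (Nvl (Length(list_col), 0) - Nvl (Length (Replace (list_col,'|')), 0) + 1)
               FROM delimited_lists)
)
SELECT /*+ leading (d) */
       d.id   id,
       Substr (d.list_col, Instr ('|' || d.list_col, '|', 1, r.rn), Instr (d.list_col || '|', '|', 1, r.rn) - Instr ('|' || d.list_col, '|', 1, r.rn)) token
  FROM delimited_lists d
  JOIN row_gen r
    ON r.rn <= Nvl (Length(d.list_col), 0) - Nvl (Length (Replace (d.list_col,'|')), 0) + 1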

sys_guid Query (GUI_QRY)

WITH guid_cby AS (
  SELECT id, LEVEL rn, list_col, Instr ('|' || d.list_col, '|', 1, LEVEL) pos
    FROM delimited_lists d
  CONNECT BY PRIOR id = id AND PRIOR sys_guid() IS NOT NULL AND
    LEVEL <= Length (d.list_col) - Nvl (Length (Replace (d.list_col, '|')), 0) + 1
)
SELECT id  id, 
       Substr (list_col, pos, Lead (pos, 1, 4000) OVER (PARTITION BY id ORDER BY pos) - pos - 1) token
  FROM guid_cby

Plan hash value: 240527573

-------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|
-------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                 |      1 |        |   5400K|00:14:12.07 |   77412 |   2404K|   2404K|       |       |          |         |
|   1 |  WINDOW SORT                   |                 |      1 |   3000 |   5400K|00:14:12.07 |   77412 |   2404K|   2404K|    20G|    45M|  163M (0)|      18M|
|   2 |   VIEW                         |                 |      1 |   3000 |   5400K|00:04:07.47 |    1509 |      0 |      0 |       |       |          |         |
|*  3 |    CONNECT BY WITHOUT FILTERING|                 |      1 |        |   5400K|00:03:55.99 |    1509 |      0 |      0 |    12M|  1343K|   10M (0)|         |
|   4 |     TABLE ACCESS FULL          | DELIMITED_LISTS |      1 |   3000 |   3000 |00:00:00.01 |    1509 |      0 |      0 |       |       |          |         |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("ID"=PRIOR NULL)

Notes on GUI_QRY

This query also generates rows using CONNECT BY, but differs from the others shown by integrating the row-generation code with the main rowset and avoiding a separate subquery against DUAL. This seems to be a more recent approach than the traditional Multiset solution. It uses a trick involving the system function sys_guid() to avoid the ‘connect by cycle’ error that you would otherwise get, as explained in this OTN thread: Reg : sys_guid().

Unfortunately, and despite its current popularity on OTN, it turns out to be even less efficient than the earlier approaches, by quite a margin.

Regex Query (RGX_QRY)

WITH row_gen AS (
        SELECT LEVEL rn FROM DUAL CONNECT BY LEVEL < 2000
)
SELECT d.id   id,
       RTrim (Regexp_Substr (d.list_col || '|', '(.*?)\|', 1, r.rn), '|') token
  FROM delimited_lists d
  JOIN row_gen r
    ON r.rn <= Nvl (Length(d.list_col), 0) - Nvl (Length (Replace (d.list_col,'|')), 0) + 1

Plan hash value: 1537360357

----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |                 |      1 |        |   5400K|00:00:03.35 |    1509 |       |       |          |
|   1 |  MERGE JOIN                     |                 |      1 |    150 |   5400K|00:00:03.35 |    1509 |       |       |          |
|   2 |   SORT JOIN                     |                 |      1 |   3000 |   3000 |00:00:00.07 |    1509 |    11M|  1318K|   10M (0)|
|   3 |    TABLE ACCESS FULL            | DELIMITED_LISTS |      1 |   3000 |   3000 |00:00:00.01 |    1509 |       |       |          |
|*  4 |   SORT JOIN                     |                 |   3000 |      1 |   5400K|00:00:01.75 |       0 |   160K|   160K|  142K (0)|
|   5 |    VIEW                         |                 |      1 |      1 |   1999 |00:00:00.01 |       0 |       |       |          |
|   6 |     CONNECT BY WITHOUT FILTERING|                 |      1 |        |   1999 |00:00:00.01 |       0 |  2048 |  2048 | 2048  (0)|
|   7 |      FAST DUAL                  |                 |      1 |      1 |      1 |00:00:00.01 |       0 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access(INTERNAL_FUNCTION("R"."RN")<=NVL(LENGTH("D"."LIST_COL"),0)-NVL(LENGTH(REPLACE("D"."LIST_COL",'|')),0)+1)
       filter(INTERNAL_FUNCTION("R"."RN")<=NVL(LENGTH("D"."LIST_COL"),0)-NVL(LENGTH(REPLACE("D"."LIST_COL",'|')),0)+1)

Notes on RGX_QRY

This query also generates rows using CONNECT BY, but differs from the others shown by using regular expressions to do the token parsing, which is simpler than the Substr/Instr approaches.

This seems to be quite popular, but it’s well known that regular expression processing can be bad for performance, and so it proves here, with CPU time increasing quadratically with number of tokens.

XML Query (XML_QRY)

SELECT id   id,
       x2   token
  FROM delimited_lists, XMLTable(
    'if (contains($X2,"|")) then ora:tokenize($X2,"\|") else $X2'
  PASSING list_col AS x2
  COLUMNS x2 VARCHAR2(4000) PATH '.'
)

Plan hash value: 2423482301

----------------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name                  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |                       |      1 |        |   5400K|00:00:10.85 |    3009 |
|   1 |  NESTED LOOPS                      |                       |      1 |     24M|   5400K|00:00:10.85 |    3009 |
|   2 |   TABLE ACCESS FULL                | DELIMITED_LISTS       |      1 |   3000 |   3000 |00:00:00.01 |    3009 |
|   3 |   COLLECTION ITERATOR PICKLER FETCH| XQSEQUENCEFROMXMLTYPE |   3000 |   8168 |   5400K|00:00:03.49 |       0 |
----------------------------------------------------------------------------------------------------------------------

Notes on XML_QRY

This query using XMLTable is copied from the version used by Rajesh in the AskTom thread above.

Model Query (MOD_QRY)

SELECT id      id,
       token   token
  FROM delimited_lists
 MODEL
    PARTITION BY (id)
    DIMENSION BY (1 rn)
    MEASURES (CAST('' AS VARCHAR2(4000)) AS token, '|' || list_col || '|' list_col, 2 pos, 0 nxtpos, Length(list_col) + 2 len)
    RULES ITERATE (2000) UNTIL (pos[1] > len[1]) (
       nxtpos[1] = Instr (list_col[1], '|', pos[1], 1),
       token[iteration_number+1] = Substr (list_col[1], pos[1], nxtpos[1] - pos[1]),
       pos[1] = nxtpos[1] + 1
    )

Plan hash value: 1656081500

-------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |                 |      1 |        |   5400K|03:10:43.97 |    1509 |   2883 |   2883 |       |       |          |
|   1 |  SQL MODEL ORDERED FAST|                 |      1 |   3000 |   5400K|03:10:43.97 |    1509 |   2883 |   2883 |  2047M|   112M| 2844M (1)|
|   2 |   TABLE ACCESS FULL    | DELIMITED_LISTS |      1 |   3000 |   3000 |00:00:00.01 |    1509 |      0 |      0 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------------------------

Notes on MOD_QRY

I wrote this query using the Model clause, available since Oracle Database version 10, for this article.

The Model clause has something of a reputation for poor performance, and this was one of the slower methods, with CPU time increasing quadratically with number of tokens.

Pipelined Function Query (PLF_QRY)

Pipelined Function Strings.Split

FUNCTION Split (p_string VARCHAR2, p_delim VARCHAR2) RETURN L1_chr_db_arr PIPELINED IS

  c_delim_len   CONSTANT SIMPLE_INTEGER := Length(p_delim);
  l_token_start          SIMPLE_INTEGER := 1;
  l_next_delim           SIMPLE_INTEGER := Instr (p_string, p_delim, l_token_start, 1);

BEGIN

  WHILE l_next_delim > 0 LOOP
    PIPE ROW (Substr (p_string, l_token_start, l_next_delim - l_token_start));
    l_token_start := l_next_delim + c_delim_len;
    l_next_delim := Instr (p_string || p_delim, p_delim, l_token_start, 1);
  END LOOP;

  RETURN;

END Split;

Query Using Pipelined Function

SELECT d.id                id,
       s.COLUMN_VALUE      token
  FROM delimited_lists d
 CROSS JOIN TABLE (Strings.Split(d.list_col, '|')) s

Plan hash value: 2608399241

----------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |                 |      1 |        |   5400K|00:00:03.08 |    3009 |
|   1 |  NESTED LOOPS                      |                 |      1 |     24M|   5400K|00:00:03.08 |    3009 |
|   2 |   TABLE ACCESS FULL                | DELIMITED_LISTS |      1 |   3000 |   3000 |00:00:00.01 |    3009 |
|   3 |   COLLECTION ITERATOR PICKLER FETCH| SPLIT           |   3000 |   8168 |   5400K|00:00:01.87 |       0 |
----------------------------------------------------------------------------------------------------------------

Notes on PLF_QRY

This is a fairly well-known approach to the problem, in which the string splitting is done within a pipelined database function that is passed the delimited string as a parameter. I wrote my own version for this article, taking care to make only one call to each of the Oracle functions Instr and Substr within the loop over the tokens.

The results confirm that it is in fact the fastest approach over all data points considered, and CPU time increased approximately linearly with number of tokens.

With Function v12.1 Query (WFN_QRY)

WITH FUNCTION Split (p_string VARCHAR2, p_delim VARCHAR2) RETURN L1_chr_db_arr IS
  c_delim_len   CONSTANT SIMPLE_INTEGER := Length(p_delim);
  l_token_start          SIMPLE_INTEGER := 1;
  l_next_delim           SIMPLE_INTEGER := Instr (p_string, p_delim, l_token_start, 1);
  l_ret_arr              L1_chr_db_arr := L1_chr_db_arr();

BEGIN

  WHILE l_next_delim > 0 LOOP
    l_ret_arr.EXTEND;
    l_ret_arr(l_ret_arr.COUNT) := Substr (p_string, l_token_start, l_next_delim - l_token_start);
    l_token_start := l_next_delim + c_delim_len;
    l_next_delim := Instr (p_string || p_delim, p_delim, l_token_start, 1);
  END LOOP;
  RETURN l_ret_arr;

END Split;
SELECT d.id                id,
       s.COLUMN_VALUE      token
  FROM delimited_lists d
 CROSS JOIN TABLE (Split(d.list_col, '|')) s

Plan hash value: 2608399241

----------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |                 |      1 |        |   5400K|00:00:17.56 |    3009 |
|   1 |  NESTED LOOPS                      |                 |      1 |     24M|   5400K|00:00:17.56 |    3009 |
|   2 |   TABLE ACCESS FULL                | DELIMITED_LISTS |      1 |   3000 |   3000 |00:00:00.01 |    3009 |
|   3 |   COLLECTION ITERATOR PICKLER FETCH| SPLIT           |   3000 |   8168 |   5400K|00:00:02.57 |       0 |
----------------------------------------------------------------------------------------------------------------

Notes on WFN_QRY

Oracle introduced the ability to include a PL/SQL function definition directly in a query in version 12.1. I converted my pipelined function into a function within a query, returning an array of character strings.

As we would expect from the results of the similar pipelined function approach, this also turns out to be a very efficient solution. However, it may be surprising to many that it is significantly slower (20-30%) than using the separate database function, given the prominence that is usually assigned to context-switching.

Recursive Subquery Factor Query (RSF_QRY)

WITH rsf (id, token, nxtpos, nxtpos2, list_col, len, iter) AS
(
SELECT id,
       Substr (list_col, 1, Instr (list_col || '|', '|', 1, 1) - 1),
       Instr (list_col || '|', '|', 1, 1) + 1,
       Instr (list_col || '|', '|', 1, 2),
       list_col || '|',
       Length (list_col) + 1,
       1
  FROM delimited_lists
UNION ALL
SELECT id,
       Substr (list_col, nxtpos, nxtpos2 - nxtpos),
       nxtpos2 + 1,
       Instr (list_col, '|', nxtpos2 + 1, 1),
       list_col,
       len,
       iter + 1
  FROM rsf r
 WHERE nxtpos <= len
)
SELECT
       id       id,
       token    token
  FROM rsf

Plan hash value: 2159872273

--------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                 | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                          |                 |      1 |        |   5400K|03:27:56.79 |    1434M|       |       |          |
|   1 |  VIEW                                     |                 |      1 |   6000 |   5400K|03:27:56.79 |    1434M|       |       |          |
|   2 |   UNION ALL (RECURSIVE WITH) BREADTH FIRST|                 |      1 |        |   5400K|03:27:45.02 |    1434M|    12M|  1343K|   10M (0)|
|   3 |    TABLE ACCESS FULL                      | DELIMITED_LISTS |      1 |   3000 |   3000 |00:00:00.01 |    1509 |       |       |          |
|   4 |    RECURSIVE WITH PUMP                    |                 |   1800 |        |   5397K|00:00:04.48 |       0 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------------------

Notes on RSF_QRY

Oracle introduced recursive subquery factoring in v11.2, as an ANSI-standard approach to SQL recursion with greater power than the older CONNECT BY recursion. I wrote the query for this article.

The query turned out to be surprisingly simple in structure, but for large numbers of tokens it was by far the slowest, and CPU time increased quadratically with number of tokens.

Match Recognize Query (RMR_QRY)

WITH row_gen AS (
        SELECT LEVEL rn FROM DUAL CONNECT BY LEVEL <= 4000
), char_streams AS (
SELECT d.id, r.rn, Substr (d.list_col || '|', r.rn, 1) chr
  FROM delimited_lists d
  JOIN row_gen r
    ON r.rn <= Nvl (Length(d.list_col), 0) + 2
), chars_grouped AS (
SELECT *
  FROM char_streams
 MATCH_RECOGNIZE (
   PARTITION BY id
   ORDER BY rn
   MEASURES chr mchr,
            FINAL COUNT(*) n_chrs,
            MATCH_NUMBER() mno
      ALL ROWS PER MATCH
  PATTERN (c*? d)
   DEFINE d AS d.chr = '|'
  ) m
)
SELECT id   id, 
       RTrim (Listagg (chr, '') WITHIN GROUP (ORDER BY rn), '|') token
  FROM chars_grouped
GROUP BY id, mno

Plan hash value: 2782416907

-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |                 |      1 |        |   5400K|00:00:52.53 |    6036K|    103K|    103K|       |       |          |         |
|   1 |  SORT GROUP BY                     |                 |      1 |    150 |   5400K|00:00:52.53 |    6036K|    103K|    103K|   330M|  6949K|   97M (1)|     395K|
|   2 |   VIEW                             |                 |      1 |    150 |     10M|00:00:53.55 |    6036K|  52539 |  52539 |       |       |          |         |
|   3 |    MATCH RECOGNIZE SORT            |                 |      1 |    150 |     10M|00:00:51.47 |    6036K|  52539 |  52539 |   231M|  5084K|  163M (1)|         |
|   4 |     VIEW                           |                 |      1 |    150 |     10M|00:00:18.75 |    6036K|      0 |      0 |       |       |          |         |
|   5 |      NESTED LOOPS                  |                 |      1 |    150 |     10M|00:00:11.76 |    6036K|      0 |      0 |       |       |          |         |
|   6 |       VIEW                         |                 |      1 |      1 |   4000 |00:00:00.01 |       0 |      0 |      0 |       |       |          |         |
|   7 |        CONNECT BY WITHOUT FILTERING|                 |      1 |        |   4000 |00:00:00.01 |       0 |      0 |      0 |  2048 |  2048 | 2048  (0)|         |
|   8 |         FAST DUAL                  |                 |      1 |      1 |      1 |00:00:00.01 |       0 |      0 |      0 |       |       |          |         |
|*  9 |       TABLE ACCESS FULL            | DELIMITED_LISTS |   4000 |    150 |     10M|00:00:10.12 |    6036K|      0 |      0 |       |       |          |         |
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   9 - filter("R"."RN"<=NVL(LENGTH("D"."LIST_COL"),0)+2)

Notes on RMR_QRY

Oracle introduced Match_Recognize in v12.1, as a mechanism for pattern matching along the lines of regular expressions for strings, but for matching patterns across records. I wrote the query for this article, converting each character in the input strings into a separate record to allow for its use.

This approach might seem somewhat convoluted, and one might expect it to be correspondingly slow. As it turns out, though, for most datasets it is faster than many of the other methods, those with very long tokens being the exception, and CPU time increased linearly with both number of tokens and number of characters per token. It is notable that, apart from the exception mentioned, it outperformed the regular expression query.

Performance Results

In the tables below, we will include some expressions used in Dimensional Benchmarking of Bracket Parsing SQL:

  • LRTB = Ratio to best time at high data point
  • RTP_L = Linear ratio as defined in the link above, averaged over successive data points
  • RTP_Q = Quadratic ratio as defined in the link above, averaged over successive data points

The CPU times are listed, but elapsed times are much the same. The columns of each table are arranged in order of increasing CPU time at the last data point, by query.

Depth fixed, high; width range low: d=18, w=(50,100,150,200)

Depth fixed, low; width range high: d=1, w=(450,900,1350,1800)

Width fixed, low; depth range high: w=5, d=(195,390,585,780)

Width fixed, high; depth range low: w=150, d=(6,12,18,24)

A Note on the Row Generation by Connect By Results

It is interesting to observe that the ‘classical’ mechanism for row-generation in string-splitting and similar scenarios turns out to be much slower than a simpler approach that removes the correlation of the row-generating subquery. This ‘classical’ mechanism has been proposed on Oracle forums over many years, while a simpler and faster approach seems to have gone undiscovered. The reason for its performance deficit is simply that running a Connect By query for every master row is, unsurprisingly, inefficient. Use of the v12.1 LATERAL correlation syntax simplifies the query but does not improve performance by much.

The more recent approach to Connect By row-generation is to use the sys_guid ‘trick’ to embed the Connect By in the main query rather than in a correlated subquery, and this has become very popular on forums such as OTN. As we have seen, although simpler, this is even worse for performance: turning the whole query into a tree-walk is no better than running a tree-walk per row. It is better to isolate the tree-walk, execute it once, and then just join its result set, as in RGN_QRY.

Conclusions

  • The database pipelined function solution (PLF_QRY) is generally the fastest across all data points
  • Using the v12.1 feature of a PL/SQL function embedded within the query is almost always next best, although slower by up to about a third; its being slower than a database function may surprise some
  • Times generally increased uniformly with numbers of tokens, usually either linearly or quadratically
  • Times did not seem to increase so uniformly with token size, except for XML (XML_QRY), Match_Recognize (RMR_QRY) and regular expression (RGX_QRY)
  • For larger numbers of tokens, three methods all showed quadratic variation and were very inefficient: Model (MOD_QRY), regular expression (RGX_QRY), and recursive subquery factors (RSF_QRY)
  • We have highlighted two inefficient but widespread approaches to row-generation by Connect By SQL, and pointed out a better method

These conclusions are based on the string-splitting problem considered, but no doubt would apply to other scenarios involving requirements to split rows into multiple rows based on some form of string-parsing.

Database Version: Oracle Database 12c 12.1.0.2.0
Output: Batch_Str
GitHub: A Framework for Dimensional Benchmarking of SQL Query Performance
Overview Article: A Framework for Dimensional Benchmarking of SQL Performance