# SQL for the Balanced Number Partitioning Problem

I noticed a post on AskTom recently that referred to an SQL solution to a version of the so-called Bin Fitting problem, where even distribution is the aim. The solution, How do I solve a Bin Fitting problem in an SQL statement?, uses Oracle’s Model clause, and, as the poster of the link observed, has the drawback that the number of bins is embedded in the query structure. I thought it might be interesting to find solutions without that drawback, so that the number of bins could be passed to the query as a bind variable. I came up with three solutions using different techniques, starting here.

An interesting article in American Scientist, The Easiest Hard Problem, notes that the problem is NP-complete, or certifiably hard, but that simple greedy heuristics often produce a good solution, including one used by schoolboys to pick football teams. The article uses the more descriptive term 'balanced number partitioning' for the problem, and notes some practical applications. The Model clause solution implements a multiple-bin version of the main Greedy Algorithm, while my non-Model SQL solutions implement variants of it that allow other techniques to be used, one of which is very simple and fast: it implements the team picking heuristic for multiple teams.

Another poster, Stew Ashton, suggested a simple change to my Model solution that improved performance, and I use this modified version here. He also suggested that using PL/SQL might be faster, and I have added my own simple PL/SQL implementation of the Greedy Algorithm, as well as a second version of the recursive subquery factoring solution that performs better than the first.

This article explains the solutions, considers two simple examples to illustrate them, and reports on performance testing across dimensions of number of items and number of bins. These show that the solutions exhibit either linear or quadratic variation in execution time with number of items, and some methods are sensitive to the number of bins while others are not.

After I had posted my solutions on the AskTom thread, I came across a thread on OTN, need help to resolve this issue, that requested a solution to a form of bin fitting problem where the bins have fixed capacity and the number of bins required must be determined. I realised that my solutions could easily be extended to add that feature, and posted extended versions of two of the solutions there. I have added a section here for this.

Updated, 5 June 2013: added Model and RSF diagrams

Update, 18 November 2017: I have now put scripts for setting up data and running the queries in a new schema onto my GitHub project: Brendan’s repo for interesting SQL. Note that I have not included the minor changes needed for the extended problem where finding the number of bins is part of the problem.

Greedy Algorithm Variants

Say there are N bins and M items.

Greedy Algorithm (GDY)
Set bin sizes zero
Loop over items in descending order of size

• Add item to current smallest bin
• Calculate new bin size

End Loop
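As a concrete reference, here is a minimal Python sketch of GDY (illustrative only; the item values are invented). A heap makes "current smallest bin" cheap to find:

```python
import heapq

def greedy_partition(items, n_bins):
    """Greedy Algorithm (GDY): assign each item, largest first,
    to the currently smallest bin."""
    # Heap of (bin_size, bin_index) gives the smallest bin in O(log N).
    heap = [(0, b) for b in range(n_bins)]
    bins = [[] for _ in range(n_bins)]
    for item in sorted(items, reverse=True):
        size, b = heapq.heappop(heap)
        bins[b].append(item)
        heapq.heappush(heap, (size + item, b))
    return bins

print(greedy_partition([4, 3, 2, 1], 2))  # [[4, 1], [3, 2]] - totals 5 and 5
```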

Greedy Algorithm with Batched Rebalancing (GBR)
Set bin sizes zero
Loop over items in descending order of size in batches of N items

• Assign batch to N bins, with bins in ascending order of size
• Calculate new bin sizes

End Loop

Greedy Algorithm with No Rebalancing – or, Team Picking Algorithm (TPA)
Assign items to bins cyclically by bin sequence in descending order of item size
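TPA reduces to a simple cyclic deal, which a short Python sketch makes plain (illustrative only; values invented):

```python
def team_pick(items, n_bins):
    """Team Picking Algorithm (TPA): deal the items, largest first,
    cyclically across the bins, with no rebalancing at all."""
    bins = [[] for _ in range(n_bins)]
    for i, item in enumerate(sorted(items, reverse=True)):
        bins[i % n_bins].append(item)
    return bins

print(team_pick([6, 5, 4, 3, 2, 1], 2))  # [[6, 4, 2], [5, 3, 1]]
```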

Two Examples

Example: Four Items

Here we see that the Greedy Algorithm finds the perfect solution, with no difference in bin size, but the two variants each give a difference of two.

Example: Six Items

Here we see that none of the algorithms finds the perfect solution. Both the standard Greedy Algorithm and its batched variant give a difference of two, while the variant without rebalancing gives a difference of four.

SQL Solutions

Original Model for GDY
See the link above for the SQL for the problem with three bins only.

The author has two measures for each bin and implements the GDY algorithm using CASE expressions and aggregation within the rules. The idea is to iterate over the items in descending order of size, setting the item bin to the bin with current smallest value. I use the word ‘bin’ for his ‘bucket’. Some notes:

• Dimension by row number, ordered by item value
• Add measures for the iteration, it, and number of iterations required, counter
• Add measures for the bin name, bucket_name, and current minimum bin value, min_tmp (only first entry used)
• Add measures for each item bin value, bucket_1-3, being the item value if it’s in that bin, else zero
• Add measures for each bin running sum, pbucket_1-3, being the current value of each bin (only first two entries used)
• The current minimum bin value, min_tmp, is computed as the least of the running sums
• The current item bin value is set to the item value for the bin whose value matches the minimum just computed, and null for the others
• The current bin name is set similarly to be the bin matching the minimum
• The new running sums are computed for each bin

Brendan’s Generic Model for GDY

```SELECT item_name, bin, item_value, Max (bin_value) OVER (PARTITION BY bin) bin_value
FROM (
SELECT * FROM items
MODEL
DIMENSION BY (Row_Number() OVER (ORDER BY item_value DESC) rn)
MEASURES (item_name,
item_value,
Row_Number() OVER (ORDER BY item_value DESC) bin,
item_value bin_value,
Row_Number() OVER (ORDER BY item_value DESC) rn_m,
0 min_bin,
Count(*) OVER () - :N_BINS - 1 n_iters
)
RULES ITERATE(100000) UNTIL (ITERATION_NUMBER >= n_iters) (
min_bin = Min(rn_m) KEEP (DENSE_RANK FIRST ORDER BY bin_value)[rn <= :N_BINS],
bin[ITERATION_NUMBER + :N_BINS + 1] = min_bin,
bin_value[min_bin] = bin_value[CV()] + Nvl (item_value[ITERATION_NUMBER + :N_BINS + 1], 0)
)
)
WHERE item_name IS NOT NULL
ORDER BY item_value DESC
```

My Model solution works for any number of bins, passing the number of bins as a bind variable. The key idea here is to use values in the first N rows of a generic bin value measure to store all the running bin values, rather than as individual measures. I have included two modifications suggested by Stew in the AskTom thread.

• Dimension by row number, ordered by item value
• Initialise a bin measure to the row number (the first N items will remain fixed)
• Initialise a bin value measure to item value (only first N entries used)
• Add the row number as a measure, rn_m, in addition to a dimension, for referencing purposes
• Add a min_bin measure for current minimum bin index (first entry only)
• Add a measure for the number of iterations required, n_iters
• The first N items are correctly binned in the measure initialisation
• Set the minimum bin index using analytic Min function with KEEP clause over the first N rows of bin value
• Set the bin for the current item to this index
• Update the bin value for the corresponding bin only

Recursive Subquery Factor for GBR

```WITH bins AS (
SELECT LEVEL bin, :N_BINS n_bins FROM DUAL CONNECT BY LEVEL <= :N_BINS
), items_desc AS (
SELECT item_name, item_value, Row_Number () OVER (ORDER BY item_value DESC) rn
FROM items
), rsf (bin, item_name, item_value, bin_value, lev, bin_rank, n_bins) AS (
SELECT b.bin,
i.item_name,
i.item_value,
i.item_value,
1,
b.n_bins - i.rn + 1,
b.n_bins
FROM bins b
JOIN items_desc i
ON i.rn = b.bin
UNION ALL
SELECT r.bin,
i.item_name,
i.item_value,
r.bin_value + i.item_value,
r.lev + 1,
Row_Number () OVER (ORDER BY r.bin_value + i.item_value),
r.n_bins
FROM rsf r
JOIN items_desc i
ON i.rn = r.bin_rank + r.lev * r.n_bins
)
SELECT r.item_name,
r.bin, r.item_value, r.bin_value
FROM rsf r
ORDER BY item_value DESC```

The idea here is to use recursive subquery factors to iterate through the items in batches of N items, assigning each item to a bin according to the rank of the bin on the previous iteration.

• Initial subquery factors form record sets for the bins and for the items with their ranks in descending order of value
• The anchor branch assigns bins to the first N items, assigning the item values to a bin value field, and setting the bin rank in ascending order of this bin value
• The recursive branch joins the batch of items to the record in the previous batch whose bin rank matches that of the item in the reverse sense (so largest item goes to smallest bin etc.)
• The analytic Row_Number function computes the updated bin ranks, and the bin values are updated by simple addition

Recursive Subquery Factor for GBR with Temporary Table
Create Table and Index

```DROP TABLE items_desc_temp
/
CREATE GLOBAL TEMPORARY TABLE items_desc_temp (
item_name  VARCHAR2(30) NOT NULL,
item_value NUMBER(8) NOT NULL,
rn         NUMBER
)
ON COMMIT DELETE ROWS
/
CREATE INDEX items_desc_temp_N1 ON items_desc_temp (rn)
/```

Insert into Temporary Table

```INSERT INTO items_desc_temp
SELECT item_name, item_value, Row_Number () OVER (ORDER BY item_value DESC) rn
FROM items;```

RSF Query with Temporary Table

```WITH bins AS (
SELECT LEVEL bin, :N_BINS n_bins FROM DUAL CONNECT BY LEVEL <= :N_BINS
), rsf (bin, item_name, item_value, bin_value, lev, bin_rank, n_bins) AS (
SELECT b.bin,
i.item_name,
i.item_value,
i.item_value,
1,
b.n_bins - i.rn + 1,
b.n_bins
FROM bins b
JOIN items_desc_temp i
ON i.rn = b.bin
UNION ALL
SELECT r.bin,
i.item_name,
i.item_value,
r.bin_value + i.item_value,
r.lev + 1,
Row_Number () OVER (ORDER BY r.bin_value + i.item_value),
r.n_bins
FROM rsf r
JOIN items_desc_temp i
ON i.rn = r.bin_rank + r.lev * r.n_bins
)
SELECT item_name, bin, item_value, bin_value
FROM rsf
ORDER BY item_value DESC```

The idea here is that in the initial RSF query a subquery factor of items was joined on a calculated field, so the whole record set had to be read, and performance could be improved by putting that initial record set into an indexed temporary table ahead of the main query. We'll see in the performance testing section that this changes quadratic variation with problem size into linear variation.

Plain Old SQL Solution for TPA

```WITH items_desc AS (
SELECT item_name, item_value,
Mod (Row_Number () OVER (ORDER BY item_value DESC), :N_BINS) + 1 bin
FROM items
)
SELECT item_name, bin, item_value, Sum (item_value) OVER (PARTITION BY bin) bin_total
FROM items_desc
ORDER BY item_value DESC
```

The idea here is that the TPA algorithm can be implemented in simple SQL using analytic functions.

• The subquery factor assigns the bins by taking the item rank in descending order of value and applying the modulo (N) function
• The main query additionally returns the bin totals, computed as analytic sums partitioned by bin

Pipelined Function for GDY
Package

```CREATE OR REPLACE PACKAGE Bin_Fit AS

TYPE bin_fit_rec_type IS RECORD (item_name VARCHAR2(100), item_value NUMBER, bin NUMBER);
TYPE bin_fit_list_type IS VARRAY(1000) OF bin_fit_rec_type;

TYPE bin_fit_cur_rec_type IS RECORD (item_name VARCHAR2(100), item_value NUMBER);
TYPE bin_fit_cur_type IS REF CURSOR RETURN bin_fit_cur_rec_type;

FUNCTION Items_Binned (p_items_cur bin_fit_cur_type, p_n_bins PLS_INTEGER) RETURN bin_fit_list_type PIPELINED;

END Bin_Fit;
/
CREATE OR REPLACE PACKAGE BODY Bin_Fit AS

c_big_value                 CONSTANT NUMBER := 100000000;
TYPE bin_fit_cur_list_type  IS VARRAY(100) OF bin_fit_cur_rec_type;

FUNCTION Items_Binned (p_items_cur bin_fit_cur_type, p_n_bins PLS_INTEGER) RETURN bin_fit_list_type PIPELINED IS

l_min_bin              PLS_INTEGER := 1;
l_min_bin_val             NUMBER;
l_bins                    SYS.ODCINumberList := SYS.ODCINumberList();
l_bin_fit_cur_rec         bin_fit_cur_rec_type;
l_bin_fit_rec             bin_fit_rec_type;
l_bin_fit_cur_list        bin_fit_cur_list_type;

BEGIN

l_bins.Extend (p_n_bins);
FOR i IN 1..p_n_bins LOOP
l_bins(i) := 0;
END LOOP;

LOOP

FETCH p_items_cur BULK COLLECT INTO l_bin_fit_cur_list LIMIT 100;
EXIT WHEN l_bin_fit_cur_list.COUNT = 0;

FOR j IN 1..l_bin_fit_cur_list.COUNT LOOP

l_bin_fit_rec.item_name := l_bin_fit_cur_list(j).item_name;
l_bin_fit_rec.item_value := l_bin_fit_cur_list(j).item_value;
l_bin_fit_rec.bin := l_min_bin;

PIPE ROW (l_bin_fit_rec);
l_bins(l_min_bin) := l_bins(l_min_bin) + l_bin_fit_cur_list(j).item_value;

l_min_bin_val := c_big_value;
FOR i IN 1..p_n_bins LOOP

IF l_bins(i) < l_min_bin_val THEN
l_min_bin := i;
l_min_bin_val := l_bins(i);
END IF;

END LOOP;

END LOOP;

END LOOP;

END Items_Binned;

END Bin_Fit;
/
```

SQL Query

```SELECT item_name, bin, item_value, Sum (item_value) OVER (PARTITION BY bin) bin_value
FROM TABLE (Bin_Fit.Items_Binned (
CURSOR (SELECT item_name, item_value FROM items ORDER BY item_value DESC),
:N_BINS))
ORDER BY item_value DESC
```

The idea here is that procedural algorithms can often be implemented more efficiently in PL/SQL than in SQL.

• The first parameter to the function is a strongly-typed reference cursor
• The SQL call passes in a SELECT statement wrapped in the CURSOR keyword, so the function can be used for any set of records that returns name and numeric value pairs
• The item records are fetched in batches of 100, using the LIMIT clause to improve efficiency
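The control flow can be mimicked in a Python generator, purely as an illustration (names invented): each row is emitted with the current minimum bin, after which the bin totals and the minimum are refreshed, just as the PL/SQL loop does around PIPE ROW. The linear scan for the minimum bin is the likely source of PLFN's roughly linear cost in the number of bins, noted in the performance results below.

```python
def items_binned(rows, n_bins):
    """Sketch of the pipelined function's control flow: `rows` is an
    iterable of (name, value) pairs, already sorted by value descending.
    Each row is emitted with its bin, then the bin totals and the index
    of the minimum bin are refreshed, mirroring the PL/SQL loop."""
    bins = [0] * n_bins
    min_bin = 0
    for name, value in rows:
        yield name, value, min_bin + 1  # 1-based bin, like the PL/SQL
        bins[min_bin] += value
        min_bin = min(range(n_bins), key=bins.__getitem__)

rows = [('item-1', 5), ('item-2', 4), ('item-3', 3), ('item-4', 2)]
print(list(items_binned(rows, 2)))  # bins end at totals 7 and 7
```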

Performance Testing
I tested the performance of the various queries using my own benchmarking framework across grids of data points, with two data sets; the queries were split into two sets based on their performance characteristics.

I presented on this approach to benchmarking SQL at the Ireland Oracle User Group conference in March 2017, Dimensional Performance Benchmarking of SQL – IOUG Presentation.

Query Modifications for Performance Testing

• The RSF query with staging table was run within a pipelined function in order to easily include the insert in the timings
• A system context was used to pass the bind variables as the framework runs the queries from PL/SQL, not from SQL*Plus
• I found that calculating the bin values using analytic sums, as in the code above, affected performance, so I removed this for clarity of results, outputting only item name, value and bin

Test Data Sets
For a given depth parameter, d, random numbers were inserted within the range 0-d for d-1 records. The insert was:

```INSERT INTO items
SELECT 'item-' || n, DBMS_Random.Value (0, p_point_deep) FROM
(SELECT LEVEL n FROM DUAL CONNECT BY LEVEL < p_point_deep);```

The number of bins was passed as a width parameter, but note that the original, linked Model solution, MODO, hard-codes the number of bins to 3.

Test Results

Data Set 1 - Small
This was used for the following queries:

• MODO - Original Model for GDY
• MODB - Brendan's Generic Model for GDY
• RSFQ - Recursive Subquery Factor for GBR
```Depth         W3         W6        W12
Run Type=MODO
D1000       1.03       1.77       1.05
D2000       3.98       6.46       5.38
D4000      15.79       20.7      25.58
D8000      63.18      88.75      92.27
D16000      364.2     347.74     351.99
Run Type=MODB
Depth         W3         W6        W12
D1000        .27        .42        .27
D2000          1       1.58       1.59
D4000       3.86        3.8       6.19
D8000      23.26      24.57      17.19
D16000      82.29      92.04      96.02
Run Type=RSFQ
D1000       3.24       3.17       1.53
D2000       8.58       9.68       8.02
D4000      25.65      24.07      23.17
D8000      111.3     108.25      98.33
D16000     471.17     407.65     399.99
```
• Quadratic variation of CPU time with number of items
• Little variation of CPU time with number of bins, although RSFQ seems to show some decline
• RSFQ is slightly slower than MODO, while my generic Model solution, MODB, is about 4 times faster than MODO

Data Set 2 - Large
This was used for the following queries:

• RSFT - Recursive Subquery Factor for GBR with Temporary Table
• POSS - Plain Old SQL Solution for TPA
• PLFN - Pipelined Function for GDY

This table gives the CPU times in seconds across the data set:

```  Depth       W100      W1000     W10000
Run Type=PLFN
D20000        .31       1.92      19.25
D40000        .65       3.87      55.78
D80000       1.28       7.72      92.83
D160000       2.67      16.59     214.96
D320000       5.29      38.68      418.7
D640000      11.61      84.57      823.9
Run Type=POSS
D20000        .09        .13        .13
D40000        .18        .21        .18
D80000        .27        .36         .6
D160000        .74       1.07        .83
D320000       1.36       1.58       1.58
D640000       3.13       3.97       4.04
Run Type=RSFT
D20000        .78        .78        .84
D40000       1.41       1.54        1.7
D80000       3.02       3.39       4.88
D160000       6.11       9.56       8.42
D320000      13.05      18.93      20.84
D640000      41.62      40.98      41.09
```
• Linear variation of CPU time with number of items
• Little variation of CPU time with number of bins for POSS and RSFT, but roughly linear variation for PLFN
• These linear methods are much faster than the earlier quadratic ones for larger numbers of items
• PLFN's approximate proportionality of time to number of bins means that, while it is faster than RSFT for small numbers of bins, it becomes slower from around 50 bins for our problem
• The proportionality to number of bins for PLFN presumably arises from the step to find the bin of minimum value
• The lack of proportionality to number of bins for RSFT may appear surprising, since it performs a sort of the bins iteratively. However, while the work for each sort is likely to be proportional to the number of bins, the number of iterations is inversely proportional to it, so the two effects cancel out

Solution Quality

The methods reported above implement three underlying algorithms, none of which guarantees an optimal solution. In order to get an idea of how the quality compares, I created new versions of the second set of queries using analytic functions to output the difference between minimum and maximum bin values, with percentage of the maximum also output. I ran these on the same grid, and report below the results for the four corners.

```Point            PLFN          RSFT          POSS
W100/D20000      72/.004%      72/.004%      19,825/1%
W100/D640000     60/.000003%   60/.000003%   633,499/.03%
W10000/D20000    189/.9%       180/.9%       19,995/67%
W10000/D640000   695/.003%     695/.003%     639,933/3%```

The results indicate that GDY (Greedy Algorithm) and GBR (Greedy Algorithm with Batched Rebalancing) generally give very similar quality results, while TPA (Team Picking Algorithm) tends to be quite a lot worse.

Extended Problem: Finding the Number of Bins Required

An important extension to the problem is when the bins have fixed capacity, and it is desired to find the minimum number of bins, then spread the items evenly between them. As mentioned at the start, I posted extensions to two of my solutions on an OTN thread, and I reproduce them here. It turns out to be quite easy to make the extension. The remainder of this section is just lifted from my OTN post and refers to the table of the original poster.

Start OTN Extract
So how do we determine the number of bins? The total quantity divided by bin capacity, rounded up, gives a lower bound on the number of bins needed. The actual number required may be larger, but mostly it will be within a very small range from the lower bound, I believe (I suspect it will nearly always be the lower bound). A good practical solution, therefore, would be to compute the solutions for a base number, plus one or more increments, and this can be done with negligible extra work (although Model might be an exception, I haven't tried it). Then the bin totals can be computed, and the first solution that meets the constraints can be used. I took two bin sets here.

SQL POS

```WITH items AS (
SELECT sl_pm_code item_name, sl_wt item_amt, sl_qty item_qty,
Ceil (Sum(sl_qty) OVER () / :MAX_QTY) n_bins
FROM ow_ship_det
), items_desc AS (
SELECT item_name, item_amt, item_qty, n_bins,
Mod (Row_Number () OVER (ORDER BY item_qty DESC), n_bins) bin_1,
Mod (Row_Number () OVER (ORDER BY item_qty DESC), n_bins + 1) bin_2
FROM items
)
SELECT item_name, item_amt, item_qty,
CASE bin_1 WHEN 0 THEN n_bins ELSE bin_1 END bin_1,
CASE bin_2 WHEN 0 THEN n_bins + 1 ELSE bin_2 END bin_2,
Sum (item_amt) OVER (PARTITION BY bin_1) bin_1_amt,
Sum (item_qty) OVER (PARTITION BY bin_1) bin_1_qty,
Sum (item_amt) OVER (PARTITION BY bin_2) bin_2_amt,
Sum (item_qty) OVER (PARTITION BY bin_2) bin_2_qty
FROM items_desc
ORDER BY item_qty DESC, bin_1, bin_2```

SQL Pipelined

```SELECT osd.sl_pm_code item_name, osd.sl_wt item_amt, osd.sl_qty item_qty,
tab.bin_1, tab.bin_2,
Sum (osd.sl_wt) OVER (PARTITION BY tab.bin_1) bin_1_amt,
Sum (osd.sl_qty) OVER (PARTITION BY tab.bin_1) bin_1_qty,
Sum (osd.sl_wt) OVER (PARTITION BY tab.bin_2) bin_2_amt,
Sum (osd.sl_qty) OVER (PARTITION BY tab.bin_2) bin_2_qty
FROM ow_ship_det osd
JOIN TABLE (Bin_Even.Items_Binned (
CURSOR (SELECT sl_pm_code item_name, sl_qty item_value,
Sum(sl_qty) OVER () item_total
FROM ow_ship_det
ORDER BY sl_qty DESC, sl_wt DESC),
:MAX_QTY)) tab
ON tab.item_name = osd.sl_pm_code
ORDER BY osd.sl_qty DESC, tab.bin_1```

Pipelined Function

```CREATE OR REPLACE PACKAGE Bin_Even AS

TYPE bin_even_rec_type IS RECORD (item_name VARCHAR2(100), item_value NUMBER, bin_1 NUMBER, bin_2 NUMBER);
TYPE bin_even_list_type IS VARRAY(1000) OF bin_even_rec_type;

TYPE bin_even_cur_rec_type IS RECORD (item_name VARCHAR2(100), item_value NUMBER, item_total NUMBER);
TYPE bin_even_cur_type IS REF CURSOR RETURN bin_even_cur_rec_type;

FUNCTION Items_Binned (p_items_cur bin_even_cur_type, p_bin_max NUMBER) RETURN bin_even_list_type PIPELINED;

END Bin_Even;
/
SHO ERR
CREATE OR REPLACE PACKAGE BODY Bin_Even AS

c_big_value                 CONSTANT NUMBER := 100000000;
c_n_bin_sets                CONSTANT NUMBER := 2;

TYPE bin_even_cur_list_type IS VARRAY(100) OF bin_even_cur_rec_type;
TYPE num_lol_list_type      IS VARRAY(100) OF SYS.ODCINumberList;

FUNCTION Items_Binned (p_items_cur bin_even_cur_type, p_bin_max NUMBER) RETURN bin_even_list_type PIPELINED IS

l_min_bin                 SYS.ODCINumberList := SYS.ODCINumberList (1, 1);
l_min_bin_val             SYS.ODCINumberList := SYS.ODCINumberList (c_big_value, c_big_value);
l_bins                    num_lol_list_type := num_lol_list_type (SYS.ODCINumberList(), SYS.ODCINumberList());

l_bin_even_cur_rec        bin_even_cur_rec_type;
l_bin_even_rec            bin_even_rec_type;
l_bin_even_cur_list       bin_even_cur_list_type;

l_n_bins                  PLS_INTEGER;
l_n_bins_base             PLS_INTEGER;
l_is_first_fetch          BOOLEAN := TRUE;

BEGIN

LOOP

FETCH p_items_cur BULK COLLECT INTO l_bin_even_cur_list LIMIT 100;
EXIT WHEN l_Bin_Even_cur_list.COUNT = 0;
IF l_is_first_fetch THEN

l_n_bins_base := Ceil (l_Bin_Even_cur_list(1).item_total / p_bin_max) - 1;

l_is_first_fetch := FALSE;

l_n_bins := l_n_bins_base;
FOR i IN 1..c_n_bin_sets LOOP

l_n_bins := l_n_bins + 1;
l_bins(i).Extend (l_n_bins);
FOR k IN 1..l_n_bins LOOP
l_bins(i)(k) := 0;
END LOOP;

END LOOP;

END IF;

FOR j IN 1..l_Bin_Even_cur_list.COUNT LOOP

l_bin_even_rec.item_name := l_bin_even_cur_list(j).item_name;
l_bin_even_rec.item_value := l_bin_even_cur_list(j).item_value;
l_bin_even_rec.bin_1 := l_min_bin(1);
l_bin_even_rec.bin_2 := l_min_bin(2);

PIPE ROW (l_bin_even_rec);

l_n_bins := l_n_bins_base;
FOR i IN 1..c_n_bin_sets LOOP
l_n_bins := l_n_bins + 1;
l_bins(i)(l_min_bin(i)) := l_bins(i)(l_min_bin(i)) + l_Bin_Even_cur_list(j).item_value;

l_min_bin_val(i) := c_big_value;
FOR k IN 1..l_n_bins LOOP

IF l_bins(i)(k) < l_min_bin_val(i) THEN
l_min_bin(i) := k;
l_min_bin_val(i) := l_bins(i)(k);
END IF;

END LOOP;

END LOOP;

END LOOP;

END LOOP;

END Items_Binned;

END Bin_Even;```

Output POS
Note BIN_1 means bin set 1, which turns out to have 4 bins, while bin set 2 then necessarily has 5.

```ITEM_NAME         ITEM_AMT   ITEM_QTY      BIN_1      BIN_2  BIN_1_AMT  BIN_1_QTY  BIN_2_AMT  BIN_2_QTY
--------------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
1239606-1080          4024        266          1          1      25562        995      17482        827
1239606-1045          1880        192          2          2      19394        886      14568        732
1239606-1044          1567        160          3          3      18115        835      14097        688
1239606-1081          2118        140          4          4      18988        793      17130        657
1239606-2094          5741         96          1          5      25562        995      18782        605
...
1239606-2107            80          3          4          2      18988        793      14568        732
1239606-2084           122          3          4          3      18988        793      14097        688
1239606-2110           210          2          2          3      19394        886      14097        688
1239606-4022           212          2          3          4      18115        835      17130        657
1239606-4021           212          2          4          5      18988        793      18782        605```

Output Pipelined

```ITEM_NAME         ITEM_AMT   ITEM_QTY      BIN_1      BIN_2  BIN_1_AMT  BIN_1_QTY  BIN_2_AMT  BIN_2_QTY
--------------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
1239606-1080          4024        266          1          1      20627        878      15805        703
1239606-1045          1880        192          2          2      18220        877      16176        703
1239606-1044          1567        160          3          3      20425        878      15651        701
1239606-1081          2118        140          4          4      22787        876      14797        701
1239606-2094          5741         96          4          5      22787        876      19630        701
...
1239606-2089            80          3          4          1      22787        876      15805        703
1239606-2112           141          3          4          2      22787        876      16176        703
1239606-4022           212          2          1          1      20627        878      15805        703
1239606-4021           212          2          2          1      18220        877      15805        703
1239606-2110           210          2          3          2      20425        878      16176        703```

End OTN Extract
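The bin-count search described at the start of the extract can be sketched in Python (hypothetical names; the plain Greedy Algorithm stands in for the binning step): take the lower bound Ceil(total/capacity), then try successive bin counts, accepting the first assignment whose largest bin fits.

```python
import heapq
import math

def bins_required(items, capacity, extra=2):
    """Find a workable bin count: start from the lower bound
    ceil(total/capacity), then try it plus a few increments, returning
    the first greedy assignment whose largest bin is within capacity."""
    lower = math.ceil(sum(items) / capacity)
    for n_bins in range(lower, lower + extra + 1):
        heap = [(0, b) for b in range(n_bins)]  # (bin total, bin index)
        assignment = []
        for item in sorted(items, reverse=True):
            total, b = heapq.heappop(heap)
            assignment.append((item, b + 1))    # 1-based bin number
            heapq.heappush(heap, (total + item, b))
        if max(total for total, _ in heap) <= capacity:
            return n_bins, assignment
    return None  # no fit within the tried range

print(bins_required([5, 5, 5, 5], 10)[0])  # lower bound suffices: 2
print(bins_required([6, 6, 6], 10)[0])     # needs one more than the bound: 3
```

The second call shows why the lower bound alone is not enough: three items of 6 cannot be packed two to a bin of capacity 10.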

Conclusions

• Various solutions for the balanced number partitioning problem have been presented, using Oracle's Model clause, Recursive Subquery Factoring, Pipelined Functions and simple SQL
• The performance characteristics of these solutions have been tested across a range of data sets
• As is often the case, the best solution depends on the shape and size of the data set
• A simple extension has been shown to allow determining the number of bins required in the bin-fitting interpretation of the problem
• Replacing a WITH clause with a staging table can be a useful technique to allow indexed scans

Get the code here: Brendan's repo for interesting SQL

# SQL for Network Grouping

I noticed an interesting question posted on OTN this (European) Saturday morning,
Hierarchical query to combine two groupings into one broad joint grouping. I quickly realised that the problem posed was an example of a very general class of network problems that arises quite often:

Given a set of nodes and a rule for pair-wise (non-directional) linking, obtain the set of implied networks

Usually, in response to such a problem someone will suggest a CONNECT BY query solution. Unfortunately, although hierarchical SQL techniques can be used theoretically to resolve these non-hierarchical networks, they tend to be extremely inefficient for networks of any size and are therefore often impractical. There are two problems in particular:

1. Non-hierarchical networks have no root nodes, so the traversal needs to be repeated from every node in the network set
2. Hierarchical queries retrieve all possible routes through a network

I illustrated the second problem in my last post, Notes on Profiling Oracle PL/SQL, and I intend to write a longer article on the subject of networks at a later date. The most efficient way to traverse generalised networks in Oracle involves the use of PL/SQL, such as in my Scribd article of June 2010, An Oracle Network Traversal PL SQL Program. For this article, though, I will stick to SQL-only techniques and will write down three solutions in a general format whereby the tables of the specific problem are read by initial subquery factors links_v and nodes_v that are used in the rest of the queries. I’ll save detailed explanation and performance analysis for the later article (see update at end of this section).

The three queries use two hierarchical methods and a method involving the Model clause:

1. CONNECT BY: This is the least efficient
2. Recursive subquery factors: This is more efficient than the first but still suffers from the two problems above
3. Model clause: This is intended to bypass the performance problems of hierarchical queries, but is still slower than PL/SQL

[Update, 2 September 2015] I have since written two articles on related subjects, the first, PL/SQL Pipelined Function for Network Analysis describes a PL/SQL program that traverses all networks and lists their structure. The second, Recursive SQL for Network Analysis, and Duality, uses a series of examples to illustrate and explain the different characteristics of the first two recursive SQL methods.

Problem Definition
Data Structure
I have taken the data structure of the OTN poster, made all fields character, and added three more records comprising a second isolated node (10) and a subnetwork of nodes 08 and 09. ITEM_ID is taken to be the primary key.

```SQL> SELECT *
2    FROM item_groups
3  /

ITEM_ID    GROUP1   GROUP2
---------- -------- --------
01         A        100
02         A        100
03         A        101
04         B        100
05         B        102
06         C        103
07         D        101
08         E        104
09         E        105
10         F        106

10 rows selected.
```

Grouping Structure
The poster defines two items to be linked if they share the same value for either GROUP1 or GROUP2 attributes (which could obviously be generalised to any number of attributes), and items are in the same group if they can be connected by a chain of links. Observe that if there were only one grouping attribute then the problem would be trivial as that would itself group the items. Having more than one makes it more interesting and more difficult.

A real world example of such networks can be seen to be sibling networks if one takes people as the nodes and father and mother as the attributes.
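As a cross-check on the grouping rule, here is a short union-find sketch in Python (structure and names invented for illustration; keys include the attribute position so that GROUP1 and GROUP2 values cannot collide):

```python
def group_items(items):
    """Group items into networks: two items are linked if they share a
    GROUP1 or GROUP2 value; networks are the connected components.
    `items` maps item_id -> (group1, group2)."""
    parent = {i: i for i in items}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)  # keep the smallest id as root

    # One representative item per (attribute position, value); each later
    # item sharing that value is linked to the representative.
    seen = {}
    for item, groups in items.items():
        for key in enumerate(groups):
            if key in seen:
                union(item, seen[key])
            else:
                seen[key] = item
    return {i: find(i) for i in items}

data = {'01': ('A', '100'), '02': ('A', '100'), '03': ('A', '101'),
        '04': ('B', '100'), '05': ('B', '102'), '06': ('C', '103'),
        '07': ('D', '101'), '08': ('E', '104'), '09': ('E', '105'),
        '10': ('F', '106')}
print(group_items(data))
```

On the sample data this puts items 01-05 and 07 under network 01 and items 08-09 under network 08, leaving 06 and 10 as isolated networks, in line with the networks reported by the SQL solutions below.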

Network Diagram

CONNECT BY Solution
SQL

```WITH links_v AS (
SELECT t_fr.item_id node_id_fr,
t_to.item_id node_id_to,
t_fr.item_id || '-' || Row_Number() OVER (PARTITION BY t_fr.item_id ORDER BY t_to.item_id) link_id
FROM item_groups t_fr
JOIN item_groups t_to
ON t_to.item_id > t_fr.item_id
AND (t_to.group1 = t_fr.group1 OR t_to.group2 = t_fr.group2)
), nodes_v AS (
SELECT item_id node_id
FROM item_groups
), tree AS (
SELECT link_id, CONNECT_BY_ROOT link_id root_id
FROM links_v
CONNECT BY NOCYCLE (node_id_fr = PRIOR node_id_to OR node_id_to = PRIOR node_id_fr OR
node_id_fr = PRIOR node_id_fr OR node_id_to = PRIOR node_id_to)
), group_by_link AS (
SELECT link_id, Min (root_id) group_id
FROM tree
GROUP BY link_id
), linked_nodes AS (
SELECT g.group_id, l.node_id_fr node_id
FROM group_by_link g
JOIN links_v l ON l.link_id = g.link_id
UNION
SELECT g.group_id, l.node_id_to
FROM group_by_link g
JOIN links_v l ON l.link_id = g.link_id
)
SELECT l.group_id "Network", l.node_id "Node"
FROM linked_nodes l
UNION ALL
SELECT '0', n.node_id
FROM nodes_v n
WHERE n.node_id NOT IN (SELECT node_id FROM linked_nodes)
ORDER BY 1, 2
```

Output

```All networks by CONNECT BY - Unlinked nodes share network id 0
Network       Node
------------- ----
10
01-1          01
02
03
04
05
07
08-1          08
09

10 rows selected.
```

Notes on CONNECT BY Solution

• For convenience I have grouped the unlinked nodes into one dummy network; it’s easy to assign them individual identifiers if desired

Recursive Subquery Factors (RSF) Solution
SQL

```WITH links_v AS (
SELECT t_fr.item_id node_id_fr,
t_to.item_id node_id_to,
t_fr.item_id || '-' || Row_Number() OVER (PARTITION BY t_fr.item_id ORDER BY t_to.item_id) link_id
FROM item_groups t_fr
JOIN item_groups t_to
ON t_to.item_id > t_fr.item_id
AND (t_to.group1 = t_fr.group1 OR t_to.group2 = t_fr.group2)
), nodes_v AS (
SELECT item_id node_id
FROM item_groups
), rsf (node_id, id, root_id) AS (
SELECT node_id, NULL, node_id
FROM nodes_v
UNION ALL
SELECT CASE WHEN l.node_id_to = r.node_id THEN l.node_id_fr ELSE l.node_id_to END,
l.link_id, r.root_id
FROM rsf r
JOIN links_v l
ON (l.node_id_fr = r.node_id OR l.node_id_to = r.node_id)
AND l.link_id != Nvl (r.id, '0')
) CYCLE node_id SET is_cycle TO '*' DEFAULT ' '
SELECT DISTINCT Min (root_id) OVER (PARTITION BY node_id) "Network", node_id "Node"
FROM rsf
ORDER BY 1, 2
```

Output

```All networks by RSF - Unlinked nodes have their own network ids
Network       Node
------------- ----
01            01
              02
              03
              04
              05
              07
06            06
08            08
              09
10            10

10 rows selected.
```
```

Notes on Recursive Subquery Factors (RSF) Solution

• Here I have given the unlinked nodes their own network identifiers; they could equally have been grouped together under a dummy network

Model Clause Solution
SQL

```WITH links_v AS (
SELECT t_fr.item_id node_id_fr,
t_to.item_id node_id_to,
t_fr.item_id || '-' || Row_Number() OVER (PARTITION BY t_fr.item_id ORDER BY t_to.item_id) link_id
FROM item_groups t_fr
JOIN item_groups t_to
ON t_to.item_id > t_fr.item_id
AND (t_to.group1 = t_fr.group1 OR t_to.group2 = t_fr.group2)
), nodes_v AS (
SELECT item_id node_id
FROM item_groups
), lnk_iter AS (
SELECT *
FROM links_v
CROSS JOIN (SELECT 0 iter FROM DUAL UNION SELECT 1 FROM DUAL)
), mod AS (
SELECT *
FROM lnk_iter
MODEL
DIMENSION BY (Row_Number() OVER (PARTITION BY iter ORDER BY link_id) rn, iter)
MEASURES (Row_Number() OVER (PARTITION BY iter ORDER BY link_id) id_rn, link_id id,
node_id_fr nd1, node_id_to nd2,
1 lnk_cur,
CAST ('x' AS VARCHAR2(100)) nd1_cur,
CAST ('x' AS VARCHAR2(100)) nd2_cur,
0 net_cur,
CAST (NULL AS NUMBER) net,
CAST (NULL AS NUMBER) lnk_prc,
1 not_done,
0 itnum)
RULES UPSERT ALL
ITERATE(100000) UNTIL (lnk_cur[1, Mod (iteration_number+1, 2)] IS NULL)
(
itnum[ANY, ANY] = iteration_number,
not_done[ANY, Mod (iteration_number+1, 2)] = Count (CASE WHEN net IS NULL THEN 1 END)[ANY, Mod (iteration_number, 2)],
lnk_cur[ANY, Mod (iteration_number+1, 2)] =
CASE WHEN not_done[CV(), Mod (iteration_number+1, 2)] > 0 THEN
Nvl (Min (CASE WHEN lnk_prc IS NULL AND net = net_cur THEN id_rn END)[ANY, Mod (iteration_number, 2)],
Min (CASE WHEN net IS NULL THEN id_rn END)[ANY, Mod (iteration_number, 2)])
END,
lnk_prc[ANY, Mod (iteration_number+1, 2)] = lnk_prc[CV(), Mod (iteration_number, 2)],
lnk_prc[lnk_cur[1, Mod (iteration_number+1, 2)], Mod (iteration_number+1, 2)] = 1,
net_cur[ANY, Mod (iteration_number+1, 2)] =
CASE WHEN Min (CASE WHEN lnk_prc IS NULL AND net = net_cur THEN id_rn END)[ANY, Mod (iteration_number, 2)] IS NULL THEN
net_cur[CV(), Mod (iteration_number, 2)] + 1
ELSE
net_cur[CV(), Mod (iteration_number, 2)]
END,
nd1_cur[ANY, Mod (iteration_number+1, 2)] = nd1[lnk_cur[CV(), Mod (iteration_number+1, 2)], Mod (iteration_number, 2)],
nd2_cur[ANY, Mod (iteration_number+1, 2)] = nd2[lnk_cur[CV(), Mod (iteration_number+1, 2)], Mod (iteration_number, 2)],
net[ANY, Mod (iteration_number+1, 2)] =
CASE WHEN (nd1[CV(),Mod (iteration_number+1, 2)] IN (nd1_cur[CV(),Mod (iteration_number+1, 2)], nd2_cur[CV(),Mod (iteration_number+1, 2)]) OR
nd2[CV(),Mod (iteration_number+1, 2)] IN (nd1_cur[CV(),Mod (iteration_number+1, 2)], nd2_cur[CV(),Mod (iteration_number+1, 2)]))
AND net[CV(),Mod (iteration_number, 2)] IS NULL THEN
net_cur[CV(),Mod (iteration_number+1, 2)]
ELSE
net[CV(),Mod (iteration_number, 2)]
END
)
)
SELECT To_Char (net) "Network", nd1 "Node"
FROM mod
WHERE not_done = 0
UNION
SELECT To_Char (net), nd2
FROM mod
WHERE not_done = 0
UNION ALL
SELECT '00', n.node_id
FROM nodes_v n
WHERE n.node_id NOT IN (SELECT nd1 FROM mod WHERE nd1 IS NOT NULL UNION SELECT nd2 FROM mod WHERE nd2 IS NOT NULL)
ORDER BY 1, 2
```

Output

```All networks by Model - Unlinked nodes share network id 00
Network       Node
------------- ----
00            06
              10
1             01
              02
              03
              04
              05
              07
2             08
              09

10 rows selected.
```

Notes on Model Clause Solution

• For convenience I have grouped the unlinked nodes into one dummy network; it’s easy to assign them individual identifiers if desired
• My Cyclic Iteration technique used here appears to be novel

Conclusions

• It is always advisable with a new problem in SQL to consider whether it falls into a general class of problems for which solutions have already been found
• Three solution methods for network resolution in pure SQL have been presented and demonstrated on a small test problem; the performance issues mentioned should be considered carefully before applying them on larger problems
• The Model clause solution is likely to be the most efficient on larger, looped networks, but if better performance is required then PL/SQL recursion-based methods would be faster
• For smaller problems with few loops the simpler method of recursive subquery factors may be preferred, or, for versions prior to v11.2, CONNECT BY

# Grouping by Unique Subsequences in SQL

Recently a deceptively simple question was asked on Oracle’s OTN PL/SQL forum that most posters initially misunderstood, myself included (Grouping Non Consecutive Rows). The question requested a solution for the problem of assigning group numbers to a sequence of records such that each group held no duplicate values of a particular field, and each group was as long as possible. One of the posters produced a solution involving the Model clause and I tried to find a solution without using that clause, just to see if it could be done. Having found such a solution, not without some difficulty, I then solved the problem via a pipelined PL/SQL function having felt that that would be more efficient than straight SQL.

I think all three solutions have their own interest, especially since forum posters frequently assert, incorrectly, that SQL solutions are always faster than those using PL/SQL. I’ll explain how each solution works in this article and will run them through my benchmarking framework. We’ll see that the PL/SQL solution time increases linearly with sequence length, while the others increase quadratically, and PL/SQL is orders of magnitude faster for even small problem sizes. I used Oracle Database v11.2 XE running on a Samsung X120 dedicated PC.

I use my Query Structure Diagramming technique to illustrate the SQL solutions.

Functional Test Data
The problem data structure is quite simple, and I have defined my own version to emphasise generality, and to include partitioning, which was not part of the initial posting. There is one table with a two-column primary key, and I added an index on the partition field plus the value field.

Table Value_Subseqs

| Column | Type |
|---|---|
| part_key* | Char(10) |
| ind* | Number |
| val | Char(10) |

(* = primary key column)

Index VSB_N1

• part_key
• val

Test Cases
Functional testing was done on a number of simple sequences, including the sequence of single characters, ABABB, which is used to illustrate the first solution query.

SQL Pairs Solution

How It Works
The first solution for this problem starts by obtaining the possible start and end points of the groups: all pairs that contain no duplicate. In the table below, the rows represent the possible pairs with X marking each value in the ranges. It’s easy to see the solution for this small test problem and the correct pairs are highlighted in yellow.

From the table it’s possible to work out a solution in SQL, given the pair set. The first correct pair is the longest starting from 1, while each subsequent correct pair is the longest beginning one element after the previous ends. This is of course a kind of tree-walk, which leads to the following solution steps (where everything is implicitly partitioned):

1. Within a subquery factor, self-join the table with the second record having a higher ind than the first
2. Add a NOT EXISTS subquery that checks for duplicate val in any pair between the outer potential start and end point pair, using another self-join
3. Add in the single element pairs via a UNION
4. Within another subquery factor, obtain the rank of each pair in terms of its length descending, as well as the starting ind value (if it might not always be 1)
5. In another subquery factor perform the tree-walk as mentioned, using the condition ‘rank = 1’ to include only the longest ranges from any point
6. Include the analytic function Row_Number in the Select list, which will be the group number
7. The main query selects from the last subquery factor, then joins the original table
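Although the query derives the groups via ranked pairs and a tree-walk, the net effect per partition is a simple greedy scan: extend the current group until adding the next value would create a duplicate. A throwaway Python sketch of that logic (the function name is invented):

```python
def group_numbers(values):
    """Assign group numbers so that no group holds a duplicate value and
    each group is as long as possible (greedy left-to-right scan)."""
    grp, seen, out = 1, set(), []
    for v in values:
        if v in seen:        # value already in current group: start a new group
            grp += 1
            seen = set()
        seen.add(v)
        out.append(grp)
    return out
```

On the test sequence ABABB this gives groups (A, B), (A, B), (B), i.e. group numbers 1, 1, 2, 2, 3.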

Notes
The query is of course likely to be performance-intensive owing to the initial subquery factor with its nested self-joins across ranges.

Query Diagram Notes
The diagram notation follows and extends notation developed earlier, including my previous blog article, and is intended to be largely self-explanatory. In this diagram, I have added a new notation for joins that are not simple foreign key joins, in which I label the join and use a note to explain it.
Oracle v11.2 introduced Recursive Subquery Factors that extend tree-walk functionality. I used the older Connect By syntax in the query since it works on older versions, but found it easier to represent in the diagram as though it were the newer implementation – the diagram shows what’s happening logically.

SQL

```WITH sqf AS (
SELECT p1.part_key, p1.ind ind_1, p2.ind ind_2
FROM value_subseqs p1
JOIN value_subseqs p2
ON p2.ind           > p1.ind
AND p2.part_key      = p1.part_key
AND NOT EXISTS (SELECT 1
FROM value_subseqs p3
JOIN value_subseqs p4
ON p4.val        = p3.val
AND p4.part_key   = p3.part_key
WHERE p3.ind        BETWEEN p1.ind AND p2.ind
AND p4.ind        BETWEEN p3.ind + 1 AND p2.ind
AND p3.part_key   = p1.part_key
AND p4.part_key   = p2.part_key)
UNION ALL
SELECT part_key, ind, ind
FROM value_subseqs p1
), rnk AS (
SELECT part_key, ind_1, ind_2,
Row_Number() OVER (PARTITION BY part_key, ind_1 ORDER BY ind_2 - ind_1 DESC) rn,
Min(ind_1) OVER (PARTITION BY part_key) ind_1_beg
FROM sqf
), grp AS (
SELECT part_key, ind_1, ind_2, Row_Number() OVER
(PARTITION BY part_key ORDER BY ind_1, ind_2) grp_no
FROM rnk
CONNECT BY ind_1 = PRIOR ind_2 + 1
AND part_key = PRIOR part_key
AND rn = 1
)
SELECT
p.part_key,
p.ind,
p.val,
g.grp_no
FROM grp g
JOIN value_subseqs p
ON p.ind            BETWEEN g.ind_1 AND g.ind_2
AND p.part_key       = g.part_key
ORDER BY p.part_key, p.ind```

Model Solution

How It Works
In the OTN thread several solutions were proposed that used Oracle’s Model clause or recursive subquery factoring, but I think only one was a general solution, since the others used string variables to successively concatenate strings over arbitrary numbers of rows and would break when the 4000 character SQL limit is hit.
The general Model solution (which was not by me, but I’ve reformatted it and applied it to my table) worked by defining two measures, for group start indices and group number. The rules specified two passes for the two measures: The first pass counts the distinct values and compares with the count of all values, between the previous group start and the current index; the second uses the group starts to set the group numbers.

1. Form the basic Select, with all the table columns required, and append a group number placeholder
2. Add the Model keyword, partitioning by part_key, dimensioning by analytic function Row_Number, ordering by ind within part_key, with val, group start and group number as measures
3. Initialise group start and group number to 1 in the measures clause
4. Define the first rule to obtain the group start index for all rows after the first as the previous group start, unless the two counts differ, in which case take the current index.
5. Define the second rule to obtain the group number for all rows as the previous group number, unless the group start has changed, in which case take the previous group number + 1.
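The two rules amount to the following procedural logic, sketched here in Python (an illustration of the rule logic only, not the Model clause itself; the function name is invented): a new group starts whenever the total and distinct counts over the candidate range differ.

```python
def model_style_groups(values):
    """Mimic the two Model rules: s[i] is the group start index for row i,
    g[i] the group number; a duplicate in the range s[i-1]..i is detected
    by comparing the total and distinct value counts over that range."""
    n = len(values)
    s = [0] * n
    g = [1] * n
    for i in range(1, n):
        window = values[s[i - 1]:i + 1]
        if len(window) == len(set(window)):   # counts match: no duplicate
            s[i], g[i] = s[i - 1], g[i - 1]
        else:                                 # duplicate: new group starts at i
            s[i], g[i] = i, g[i - 1] + 1
    return g
```

Note that each step re-counts over the whole current group range, so the work is proportional to sequence length times average group size; for a fixed group size that is linear in sequence length, which is the behaviour one would naively expect of the Model query.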

Query Diagram Notes
Queries with the Model clause have a structure that is rather different from other queries, and the diagram attempts to reflect that structure for these problems. The main query feeds its output into an array processing component with a set of rules that specify how any additional data items (called measures) are to be calculated, in a mostly declarative fashion.
The model box above contains 4 specification types:

• Partition – processing is to be performed separately by one or more columns; the same meaning as in analytic functions
• Dimension – columns by which the array is dimensioned; can include analytic functions, as here
• Measures – remaining columns that may be calculated or updated by the rules, possibly including placeholders from the main query
• Rules – a set of rules that specify measure calculation; rules are processed sequentially, unless otherwise specified; in the diagram:
  • n – the current dimension value
  • F(n-1,n) – denotes that the value depends on values from previous and current rows (and so on, with ‘..’ denoting a range)
  • ^ – denotes that the calculation progresses in ascending order by dimension; this is the default so does not have to be coded

SQL

```SELECT
part_key,
rn,
val,
g grp_no
FROM value_subseqs
MODEL
PARTITION BY (part_key)
DIMENSION BY (Row_number() OVER (PARTITION BY part_key ORDER BY ind) rn)
MEASURES (val, 1 g, 1 s)
RULES (
s[rn > 1] = CASE COUNT (DISTINCT val)[rn BETWEEN s[CV() - 1] AND CV()]
WHEN COUNT (val)[rn BETWEEN s[CV() - 1] AND cv()] THEN s[cv() - 1]
ELSE CV(rn)
END,
g[rn > 1] = CASE s[CV()]
WHEN s[CV() - 1] THEN g[CV() - 1]
ELSE g[CV() - 1] + 1
END
)
ORDER BY part_key, rn```

Pipelined Function Hash Solution

How It Works
This approach is based on pipelined database functions, which are specified to return array types. Pipelining means that Oracle transparently returns the records in batches while processing continues, thus avoiding memory problems, and returning initial rows more quickly. See Pipelined Functions (AskTom) for some other examples of use.
Within the function there is a simple cursor loop over the ordered sequence. A PL/SQL index-by array stores values for the current group, and allows duplicate checking to take place without any additional searching or sorting. The array is reset whenever the group changes.

Types
Two database types are specified, the first being an object with fields for the table columns and an extra field for the group to be derived; the second is an array of the nested table form with elements of the first type.

Function Pseudocode

```Loop over a cursor selecting the records in order
If the partition field value changes then
Reset group and index-by array
Else if the current value is already in the index-by array then
Increment group number and reset index-by array
End if
Add the value to the index-by array
Pipe the row out
End loop```

Function Definition (within package)

```FUNCTION Hash_Array RETURN value_subseq_list_type PIPELINED IS

TYPE value_list_type      IS TABLE OF PLS_INTEGER INDEX BY VARCHAR2(4000);
l_value_list_NULL         value_list_type;
l_value_list              value_list_type;
l_group_no                PLS_INTEGER;
old_part_key              VARCHAR2(10);

BEGIN

FOR r_val IN (SELECT part_key, ind, val FROM value_subseqs ORDER BY part_key, ind) LOOP

IF r_val.part_key != old_part_key OR old_part_key IS NULL THEN

old_part_key := r_val.part_key;
l_group_no := 1;
l_value_list := l_value_list_NULL;

ELSIF l_value_list.Exists (r_val.val) THEN

l_group_no := l_group_no + 1;
l_value_list := l_value_list_NULL;

END IF;

l_value_list (r_val.val) := 1;
PIPE ROW (value_subseq_type (r_val.part_key, r_val.ind, r_val.val, l_group_no));

END LOOP;

RETURN;

END Hash_Array;```

SQL
Just select the fields named in the record from the function wrapped in the TABLE keyword.

```SELECT
part_key,
ind,
val,
grp_no
FROM TABLE (Subseq_Groups.Hash_Array)
ORDER BY part_key, ind```

Pipelined Function Sim Solution
This solution was added after performance testing on the first data set, below. It is a hybrid between the Model and pipelined function approaches, created to try to understand the apparently quadratic variation of Model execution time with sequence length noted in testing.

How It Works
This function uses the same database types as the Hash solution, but with the same counting idea as the Model solution within an old-fashioned nested cursor structure that one would not expect to perform efficiently.

Function Pseudocode

```Loop over a cursor selecting the records in order
If the partition field value changes then
Reset group and group starting index
Else
Select counts of val and distinct val from the table between the group starting and current indices
If difference in counts then
Increment group number and reset group starting index
End if
End if
Pipe the row out
End loop```

Function Definition (within package)

```FUNCTION Sim_Model RETURN value_subseq_list_type PIPELINED IS

l_group_no                PLS_INTEGER;
l_ind_beg                 PLS_INTEGER;
l_is_dup                  PLS_INTEGER;
old_part_key              VARCHAR2(10);

BEGIN

FOR r_val IN (SELECT part_key, ind, val FROM value_subseqs ORDER BY part_key, ind) LOOP

IF r_val.part_key != old_part_key OR old_part_key IS NULL THEN

old_part_key := r_val.part_key;
l_group_no := 1;
l_ind_beg := r_val.ind;

ELSE

SELECT Count(val) - Count(DISTINCT val)
INTO l_is_dup
FROM value_subseqs
WHERE part_key = r_val.part_key
AND ind      BETWEEN l_ind_beg AND r_val.ind;

IF l_is_dup > 0 THEN

l_group_no := l_group_no + 1;
l_ind_beg := r_val.ind;

END IF;

END IF;

PIPE ROW (value_subseq_type (r_val.part_key, r_val.ind, r_val.val, l_group_no));

END LOOP;

RETURN;

END Sim_Model;```

SQL
Just select the fields named in the record from the function wrapped in the TABLE keyword.

```SELECT
part_key,
ind,
val,
grp_no
FROM TABLE (Subseq_Groups.Sim_Model)
ORDER BY part_key, ind```

Performance Analysis
In SQL Pivot and Prune Queries – Keeping an Eye on Performance, in May 2011, I first applied an approach to performance testing of SQL queries whereby the queries are tested across a 2-dimensional domain, using a testing framework developed for that work. The same approach has been followed here, although the framework has been substantially improved and extended since then. In this case it may be of interest to observe performance across the following two dimensions:

• Number of records per partition key
• Average group size

However, as I prefer to randomise the test data, the group size is known only after querying the data set, so I will use a proxy dimension instead. The value field will be a string of random capital letters drawn from the 20-letter alphabet A-T, with string length equal to the proxy dimension. As this length increases, so does the number of possible distinct values, and with it the average group size. Generally execution time would be expected to be proportional to the number of partition keys when the other dimensions are fixed, and I will use 2 partition key values throughout.
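A sketch of such a data generator in Python (the function name and row shape are assumptions for illustration; the actual framework code is not shown here):

```python
import random
import string

ALPHABET = string.ascii_uppercase[:20]      # the 20 capital letters A-T

def gen_rows(n_parts, recs_per_part, val_len, seed=0):
    """Generate (part_key, ind, val) rows; a longer val means more possible
    distinct values, hence fewer duplicates and a larger average group size."""
    rnd = random.Random(seed)               # seeded for reproducible test data
    rows = []
    for p in range(1, n_parts + 1):
        for i in range(1, recs_per_part + 1):
            val = ''.join(rnd.choice(ALPHABET) for _ in range(val_len))
            rows.append(('P%d' % p, i, val))
    return rows
```

For example, `gen_rows(2, 100, 3)` corresponds to a W1-D3 cell: 2 partition keys, 100 records each, values of length 3.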

Data Set 1
The first data set is chosen to keep the CPU times within reason for all queries, which limits the possible ranges (we’ll eliminate one query later and extend the ranges).

Output Record Counts

| Depth/Width | W1 | W2 | W4 |
|---|---|---|---|
| Records/Part> | 100 | 200 | 400 |
| D1 | 5 | 6 | 5 |
| D2 | 25 | 24 | 24 |
| D3 | 100 | 80 | 89 |
| D4 | 100 | 200 | 267 |

CPU Times (Seconds)

| Query | Depth | W1 | W2 | W4 | W/Prior W Average |
|---|---|---|---|---|---|
| SQL Pairs | D1 | 2.11 | 8.49 | 33.99 | 4 |
| | D2 | 4.88 | 13.66 | 42.15 | 2.9 |
| | D3 | 10.3 | 46.71 | 255.64 | 5 |
| | D4 | 10.33 | 78.25 | 595.58 | 7.6 |
| Model | D1 | 0.32 | 1.08 | 4.23 | 3.6 |
| | D2 | 0.31 | 1.14 | 4.23 | 3.7 |
| | D3 | 0.34 | 1.16 | 4.65 | 3.7 |
| | D4 | 0.36 | 1.36 | 4.86 | 3.7 |
| Hash | D1 | 0.03 | 0.03 | 0.03 | 1 |
| | D2 | 0 | 0.01 | 0.03 | 2 |
| | D3 | 0.01 | 0.01 | 0.02 | 1.5 |
| | D4 | 0.02 | 0.01 | 0.02 | 1.3 |

Discussion

We can see immediately that on all data points there is the same performance ranking of the three queries and the differences are extreme. On W4-D4, SQL Pairs takes 123 times as long as Model, which in turn takes 243 times as long as Hash.

SQL Pairs
The Average Ratio figure in the table above is the average ratio between successive columns across the width dimension. On D1 this is 4, meaning that the CPU time has risen with the square of the width factor increase. On D4 it is 7.6, almost the cube of the factor. It appears that the performance varies with the square of the sequence length, except when the group size reaches the sequence length, when it becomes the cube. We would not consider this query for real problems when faster alternatives exist.

Model
The Average Ratio figure on all depths is about 3.7, meaning that the CPU time has risen by almost the square of the width factor increase. This is very surprising because, considering the algorithm one would assume the query to implement, if the group size remains constant then the work done in computing each group ought to be constant too, and the total work ought to rise by the same factor as the sequence length. We’ll look at this further in our second, larger data set, where we’ll also consider an apparently similar algorithm implemented in PL/SQL.

Hash
The Average Ratio figure varies between 1 and 2, but the CPU times are really too small for the figure to be considered reliable (note that the zero figure for W1-D2 was replaced by 0.005 in order to allow the log graph to include it). We can safely say, though, that this is faster by orders of magnitude even on very small problems, and will again look at a larger data set to see whether more can be said.

Data Set 2 (Second Query Set)
The second data set is chosen to give wider ranges, after excluding the Pairs query. A second function was added to replace it, labelled ‘Sim’ below and described above.

Output Record Counts

| Depth/Width | W1 | W2 | W4 | W8 | W16 | W32 |
|---|---|---|---|---|---|---|
| Records/Part> | 100 | 200 | 400 | 800 | 1600 | 3200 |
| D1 | 5 | 5 | 5 | 5 | 5 | 5 |
| D2 | 18 | 25 | 23 | 24 | 25 | 23 |
| D3 | 50 | 80 | 89 | 84 | 119 | 110 |
| D4 | 100 | 200 | 200 | 267 | 356 | 582 |
| D5 | 100 | 200 | 400 | 533 | 1600 | 2133 |
| D6 | 100 | 200 | 400 | 800 | 1600 | 3200 |

CPU Times (Seconds)

| Query | Depth | W1 | W2 | W4 | W8 | W16 | W32 | W/Prior W Average |
|---|---|---|---|---|---|---|---|---|
| Hash | D1 | 0.02 | 0.01 | 0.01 | 0.03 | 0.06 | 0.10 | 1.6 |
| | D2 | 0.02 | 0.01 | 0.02 | 0.04 | 0.06 | 0.09 | 1.5 |
| | D3 | 0.02 | 0.02 | 0.02 | 0.04 | 0.06 | 0.09 | 1.4 |
| | D4 | 0.01 | 0.02 | 0.01 | 0.03 | 0.06 | 0.11 | 1.9 |
| | D5 | 0.01 | 0.03 | 0.02 | 0.05 | 0.06 | 0.09 | 1.8 |
| | D6 | 0.01 | 0.02 | 0.01 | 0.05 | 0.06 | 0.09 | 2.0 |
| Model | D1 | 0.28 | 1.01 | 4.05 | 16.28 | 63.34 | 257.46 | 3.9 |
| | D2 | 0.27 | 1.06 | 4.00 | 15.97 | 66.27 | 249.99 | 3.9 |
| | D3 | 0.29 | 1.14 | 4.24 | 16.28 | 62.91 | 250.50 | 3.9 |
| | D4 | 0.33 | 1.24 | 4.74 | 17.66 | 69.00 | 263.34 | 3.8 |
| | D5 | 0.33 | 1.26 | 5.10 | 19.17 | 81.59 | 314.39 | 3.9 |
| | D6 | 0.33 | 1.28 | 4.99 | 20.55 | 80.80 | 331.38 | 4.0 |
| Sim | D1 | 0.25 | 0.37 | 0.74 | 1.70 | 3.14 | 6.18 | 1.9 |
| | D2 | 0.28 | 0.59 | 1.21 | 2.42 | 4.73 | 9.28 | 2.0 |
| | D3 | 0.33 | 0.67 | 1.43 | 2.79 | 5.86 | 11.28 | 2.0 |
| | D4 | 0.34 | 0.77 | 1.58 | 3.15 | 6.97 | 14.71 | 2.1 |
| | D5 | 0.36 | 0.73 | 1.69 | 3.61 | 9.64 | 22.58 | 2.3 |
| | D6 | 0.36 | 0.73 | 1.71 | 3.79 | 9.43 | 26.45 | 2.4 |

Discussion

We can see immediately that on all data points there is the same performance ranking of the three queries, and the differences are extreme. On W32-D6, Model takes 12 times as long as Sim, which in turn takes 294 times as long as Hash, making Model some 3,678 times slower than Hash. (I have included the elapsed times, which are very close to the CPU times, in the shared Excel file above; I think the table data are being read from buffer cache throughout.)

Model
The Average Ratio figure on all depths is about 3.9, meaning that the CPU time has risen by almost the square of the width factor increase. It seems that the internal algorithm applied from the Model query here is doing something different from what we would expect, and with dire consequences for performance. For what it’s worth, here is the last Execution Plan:

```---------------------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name          | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |               |      1 |        |   6400 |00:05:33.45 |      30 |       |       |          |
|   1 |  SORT ORDER BY       |               |      1 |   6437 |   6400 |00:05:33.45 |      30 |   478K|   448K|  424K (0)|
|   2 |   SQL MODEL ORDERED  |               |      1 |   6437 |   6400 |05:57:01.43 |      30 |  1156K|   974K| 1002K (0)|
|   3 |    WINDOW SORT       |               |      1 |   6437 |   6400 |00:00:00.03 |      30 |   337K|   337K|  299K (0)|
|   4 |     TABLE ACCESS FULL| VALUE_SUBSEQS |      1 |   6437 |   6400 |00:00:00.01 |      30 |       |       |          |
---------------------------------------------------------------------------------------------------------------------------```

Hash
The Average Ratio figure varies between 1.4 and 2, but the CPU times are probably still too small for the figure to be considered reliable. We can again say, though, that this is faster than the Model solution by orders of magnitude even on very small problems, and that the CPU time does not appear to be rising faster than linearly, so that the performance advantage will increase with problem size.

Sim
The Average Ratio figure varies between 1.9 and 2.4, and up to D3 is not more than 2, which indicates a pretty exact proportionality between sequence length and CPU time. This is consistent with our expectation of linearity so long as the group sizes are smaller than sequence lengths. For D6, where the group sizes are the same as the sequence lengths, we would expect to see a quadratic term (number of counts doubles, and work done in sorting/counting also doubles, if that is linear), and time in fact trebles between W16 and W32.
The results from this solution support the view that there is some internal implementation problem with Model for this example that is causing its quadratic CPU time variation. In my August 2011 revision of Forming Range-Based Break Groups with Advanced SQL, I noted significant under-performance in a query using analytic functions, which also seemed to be due to an internal implementation problem, and found a work-around that made the query perform as expected. I believe both examples illustrate well the power of this kind of dimensional performance analysis.

Model Clause vs Pipelined Functions
I googled model performance problems and the two top-ranked articles are briefly discussed in the first two subsections below.
MODEL Performance Tuning
The author states: ‘In some cases MODEL query performance can even be so poor that it renders the query unusable. One of the keys to writing efficient and scalable MODEL queries seems to be keeping session memory use to a minimum’. Our first case would seem to fall into one of those cases, Model being 3,678 times slower than the function, at 332 seconds, on a problem of only 3,200 records (times two partition keys) and rising quadratically. The article considers the effects of changing various features of a test Model problem on the memory usage, assuming that that is one cause of poor performance.
From Pipelined Function to Model
This article is interesting in that it takes pretty much the opposite line to my own. For a different problem, the author started with a pipelined function solution, and then went to Model on performance grounds. He says: ‘Since we were returning the rows in a pipelined (streaming) fashion, the performance was fine initially. It was when the function was called constantly and then joined with other tables that we ran into trouble’. The problem he identifies seems to be principally the fact that Oracle’s Cost Based Optimiser (CBO) cannot accurately predict the cardinality of the function, and assigns a default, on his system (as on mine) of 8168. This can cause poor execution plans when joined with other tables. His initial solution was to use the CARDINALITY hint, which worked but is undocumented and inflexible. He also notes possible performance issues caused by Oracle’s translating the PL/SQL table into an SQL result set and goes on to propose a Model solution to avoid this problem. Unfortunately, the author does not provide any results on significant data sets to demonstrate the performance differences. The following article looks specifically at the cardinality issue.
setting cardinality for pipelined and table functions
The author (Adrian Billington) considers four techniques that he labels thus:

• CARDINALITY hint (9i+) undocumented
• OPT_ESTIMATE hint (10g+) undocumented
• DYNAMIC_SAMPLING hint (11.1.0.7+)
• Extensible Optimiser (10g+)

The first two are not recommended on the grounds of being undocumented. The third option appears quite a lot simpler than the fourth and I will look at that approach in a second example problem, in my next article List Aggregation in Oracle – Comparing Three Methods.

Discussion
The pipelined function approach is plainly much faster than the SQL solutions in this example, but one has to be aware of possible issues such as the cardinality issue mentioned. One also needs to be aware that pure SQL statements are ‘read-consistent’ in Oracle, but this is not the case when functions are called that themselves do SQL.
Context switches between the SQL and PL/SQL engines, as well as the work done in translating between collections and SQL record sets, are often cited as performance reasons for preferring SQL-only solutions. As we have seen though, these issues, while real, can be dwarfed by algorithmic differences.

Conclusions

• For the subsequence grouping problem addressed, using a pipelined function is faster by far than the SQL-only solutions identified
• Although widely asserted, the notion that any query processing will be executed more efficiently in pure SQL than in PL/SQL is a myth
• The Model solution using embedded aggregate Counts is much slower than expected and its quadratic CPU variation suggests performance problems within Oracle’s internal implementation of the Model clause
• Dimensional performance analysis is very powerful although its use appears to be extremely rare in the Oracle community
• It is suggested that diagrammatic techniques, such as my Query Structure Diagramming, although also very rarely used, offer important advantages for query documentation and design