SQL for the Balanced Number Partitioning Problem

I noticed a post on AskTom recently that referred to an SQL solution to a version of the so-called Bin Fitting problem, where even distribution is the aim. The solution, How do I solve a Bin Fitting problem in an SQL statement?, uses Oracle’s Model clause, and, as the poster of the link observed, has the drawback that the number of bins is embedded in the query structure. I thought it might be interesting to find solutions without that drawback, so that the number of bins could be passed to the query as a bind variable. I came up with three solutions using different techniques, starting here.

An interesting article in American Scientist, The Easiest Hard Problem, notes that the problem is NP-complete, or certifiably hard, but that simple greedy heuristics often produce a good solution, including one used by schoolboys to pick football teams. The article uses the more descriptive term 'balanced number partitioning' for the problem, and notes some practical applications. The Model clause solution implements a multiple-bin version of the main Greedy Algorithm, while my non-Model SQL solutions implement variants of it that allow other techniques to be used, one of which is very simple and fast: it implements the team-picking heuristic for multiple teams.

Another poster, Stew Ashton, suggested a simple change to my Model solution that improved performance, and I use this modified version here. He also suggested that using PL/SQL might be faster, and I have added my own simple PL/SQL implementation of the Greedy Algorithm, as well as a second version of the recursive subquery factoring solution that performs better than the first.

This article explains the solutions, considers two simple examples to illustrate them, and reports on performance testing across dimensions of number of items and number of bins. These show that the solutions exhibit either linear or quadratic variation in execution time with number of items, and some methods are sensitive to the number of bins while others are not.

After I had posted my solutions on the AskTom thread, I came across a thread on OTN, need help to resolve this issue, that requested a solution to a form of bin fitting problem where the bins have fixed capacity and the number of bins required must be determined. I realised that my solutions could easily be extended to add that feature, and posted extended versions of two of the solutions there. I have added a section here for this.

Update, 5 June 2013: added Model and RSF diagrams

Update, 18 November 2017: I have now put scripts for setting up data and running the queries in a new schema onto my GitHub project: Brendan’s repo for interesting SQL. Note that I have not included the minor changes needed for the extended problem where finding the number of bins is part of the problem.

Greedy Algorithm Variants

Say there are N bins and M items.

Greedy Algorithm (GDY)
Set bin sizes zero
Loop over items in descending order of size

  • Add item to current smallest bin
  • Calculate new bin size

End Loop

Greedy Algorithm with Batched Rebalancing (GBR)
Set bin sizes zero
Loop over items in descending order of size in batches of N items

  • Assign batch to N bins, with bins in ascending order of size
  • Calculate new bin sizes

End Loop

Greedy Algorithm with No Rebalancing – or, Team Picking Algorithm (TPA)
Assign items to bins cyclically by bin sequence in descending order of item size
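
For illustration (my own hypothetical figures, not those of the diagrams below), take six items of sizes 8, 7, 6, 5, 4, 3 and two bins:

  • GDY: 8 to bin 1, 7 to bin 2, 6 to bin 2 (13), 5 to bin 1 (13), 4 to bin 1 (17), 3 to bin 2 (16); difference 1
  • GBR: batch (8, 7) gives bins (8, 7); batch (6, 5) sends 6 to the smaller bin, giving (13, 13); batch (4, 3) gives (17, 16); difference 1
  • TPA: bin 1 gets 8, 6, 4 (18) and bin 2 gets 7, 5, 3 (15); difference 3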

Two Examples

Example: Four Items
[Figure: Binfit, v1.3 - 4-items]
Here we see that the Greedy Algorithm finds the perfect solution, with no difference in bin size, but the two variants have a difference of two.
Example: Six Items
[Figure: Binfit, v1.3 - 6-items]
Here we see that none of the algorithms finds the perfect solution. Both the standard Greedy Algorithm and its batched variant give a difference of two, while the variant without rebalancing gives a difference of four.

SQL Solutions

Original Model for GDY
See the link above for the SQL for the problem with three bins only.

The author has two measures for each bin and implements the GDY algorithm using CASE expressions and aggregation within the rules. The idea is to iterate over the items in descending order of size, setting the item bin to the bin with current smallest value. I use the word ‘bin’ for his ‘bucket’. Some notes:

  • Dimension by row number, ordered by item value
  • Add measures for the iteration, it, and number of iterations required, counter
  • Add measures for the bin name, bucket_name, and current minimum bin value, min_tmp (only first entry used)
  • Add measures for each item bin value, bucket_1-3, being the item value if it’s in that bin, else zero
  • Add measures for each bin running sum, pbucket_1-3, being the current value of each bin (only first two entries used)
  • The current minimum bin value, min_tmp[1], is computed as the least of the running sums
  • The current item bin value is set to the item value for the bin whose value matches the minimum just computed, and null for the others
  • The current bin name is set similarly to be the bin matching the minimum
  • The new running sums are computed for each bin

Brendan’s Generic Model for GDY

SELECT item_name, bin, item_value, Max (bin_value) OVER (PARTITION BY bin) bin_value
  FROM (
SELECT * FROM items
  MODEL 
    DIMENSION BY (Row_Number() OVER (ORDER BY item_value DESC) rn)
    MEASURES (item_name, 
              item_value,
              Row_Number() OVER (ORDER BY item_value DESC) bin,
              item_value bin_value,
              Row_Number() OVER (ORDER BY item_value DESC) rn_m,
              0 min_bin,
              Count(*) OVER () - :N_BINS - 1 n_iters
    )
    RULES ITERATE(100000) UNTIL (ITERATION_NUMBER >= n_iters[1]) (
      min_bin[1] = Min(rn_m) KEEP (DENSE_RANK FIRST ORDER BY bin_value)[rn <= :N_BINS],
      bin[ITERATION_NUMBER + :N_BINS + 1] = min_bin[1],
      bin_value[min_bin[1]] = bin_value[CV()] + Nvl (item_value[ITERATION_NUMBER + :N_BINS + 1], 0)
    )
)
 WHERE item_name IS NOT NULL
 ORDER BY item_value DESC

My Model solution works for any number of bins, passing the number of bins as a bind variable. The key idea here is to use values in the first N rows of a generic bin value measure to store all the running bin values, rather than as individual measures. I have included two modifications suggested by Stew in the AskTom thread.

  • Dimension by row number, ordered by item value
  • Initialise a bin measure to the row number (the first N items will remain fixed)
  • Initialise a bin value measure to item value (only first N entries used)
  • Add the row number as a measure, rn_m, in addition to a dimension, for referencing purposes
  • Add a min_bin measure for current minimum bin index (first entry only)
  • Add a measure for the number of iterations required, n_iters
  • The first N items are correctly binned in the measure initialisation
  • Set the minimum bin index using analytic Min function with KEEP clause over the first N rows of bin value
  • Set the bin for the current item to this index
  • Update the bin value for the corresponding bin only
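
To run the query stand-alone in SQL*Plus, the number of bins can be set up as a bind variable first; a minimal sketch, for three bins:

VARIABLE N_BINS NUMBER
EXEC :N_BINS := 3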

[Figure: Binfit, v1.3 - MOD]

Recursive Subquery Factor for GBR

WITH bins AS (
       SELECT LEVEL bin, :N_BINS n_bins FROM DUAL CONNECT BY LEVEL <= :N_BINS
), items_desc AS (
       SELECT item_name, item_value, Row_Number () OVER (ORDER BY item_value DESC) rn
         FROM items
), rsf (bin, item_name, item_value, bin_value, lev, bin_rank, n_bins) AS (
SELECT b.bin,
       i.item_name, 
       i.item_value, 
       i.item_value,
       1,
       b.n_bins - i.rn + 1,
       b.n_bins
  FROM bins b
  JOIN items_desc i
    ON i.rn = b.bin
 UNION ALL
SELECT r.bin,
       i.item_name, 
       i.item_value, 
       r.bin_value + i.item_value,
       r.lev + 1,
       Row_Number () OVER (ORDER BY r.bin_value + i.item_value),
       r.n_bins
  FROM rsf r
  JOIN items_desc i
    ON i.rn = r.bin_rank + r.lev * r.n_bins
)
SELECT r.item_name,
       r.bin, r.item_value, r.bin_value
  FROM rsf r
 ORDER BY item_value DESC

The idea here is to use recursive subquery factors to iterate through the items in batches of N items, assigning each item to a bin according to the rank of the bin on the previous iteration.

  • Initial subquery factors form record sets for the bins and for the items with their ranks in descending order of value
  • The anchor branch assigns bins to the first N items, assigning the item values to a bin value field, and setting the bin rank in ascending order of this bin value
  • The recursive branch joins the batch of items to the record in the previous batch whose bin rank matches that of the item in the reverse sense (so largest item goes to smallest bin etc.)
  • The analytic Row_Number function computes the updated bin ranks, and the bin values are updated by simple addition
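
For example, with N = 3 bins the recursive branch at level 1 processes items ranked 4 to 6: the bin ranked 1 (currently smallest) joins to item 4 (= 1 + 1 x 3), the largest item of the batch, while the bin ranked 3 (currently largest) joins to item 6, the smallest.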

[Figure: Binfit, v1.3 - RSF]

Recursive Subquery Factor for GBR with Temporary Table
Create Table and Index

DROP TABLE items_desc_temp
/
CREATE GLOBAL TEMPORARY TABLE items_desc_temp (
   item_name  VARCHAR2(30) NOT NULL,  
   item_value NUMBER(8) NOT NULL,
   rn         NUMBER
)
ON COMMIT DELETE ROWS
/
CREATE INDEX items_desc_temp_N1 ON items_desc_temp (rn)
/

Insert into Temporary Table

INSERT INTO items_desc_temp
SELECT item_name, item_value, Row_Number () OVER (ORDER BY item_value DESC) rn
  FROM items;

RSF Query with Temporary Table

WITH bins AS (
       SELECT LEVEL bin, :N_BINS n_bins FROM DUAL CONNECT BY LEVEL <= :N_BINS
), rsf (bin, item_name, item_value, bin_value, lev, bin_rank, n_bins) AS (
SELECT b.bin,
       i.item_name, 
       i.item_value, 
       i.item_value,
       1,
       b.n_bins - i.rn + 1,
       b.n_bins
  FROM bins b
  JOIN items_desc_temp i
    ON i.rn = b.bin
 UNION ALL
SELECT r.bin,
       i.item_name, 
       i.item_value, 
       r.bin_value + i.item_value,
       r.lev + 1,
       Row_Number () OVER (ORDER BY r.bin_value + i.item_value),
       r.n_bins
  FROM rsf r
  JOIN items_desc_temp i
    ON i.rn = r.bin_rank + r.lev * r.n_bins
)
SELECT item_name, bin, item_value, bin_value
  FROM rsf
 ORDER BY item_value DESC

The idea here is that in the initial RSF query a subquery factor of items was joined on a calculated field, so the whole record set had to be read, and performance could be improved by putting that initial record set into an indexed temporary table ahead of the main query. We'll see in the performance testing section that this changes quadratic variation with problem size into linear variation.

Plain Old SQL Solution for TPA

WITH items_desc AS (
       SELECT item_name, item_value, 
              Mod (Row_Number () OVER (ORDER BY item_value DESC), :N_BINS) + 1 bin
         FROM items
)
SELECT item_name, bin, item_value, Sum (item_value) OVER (PARTITION BY bin) bin_total
  FROM items_desc
 ORDER BY item_value DESC

The idea here is that the TPA algorithm can be implemented in simple SQL using analytic functions.

  • The subquery factor assigns the bins by taking the item rank in descending order of value and applying the modulo (N) function
  • The main query returns the bin totals in addition by analytic summing by bin
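
For example, with :N_BINS = 3 the items ranked 1 to 6 get bins 2, 3, 1, 2, 3, 1: the numbering differs from a cycle starting at bin 1, but each cycle of three items still places exactly one item in each bin.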

Pipelined Function for GDY
Package

CREATE OR REPLACE PACKAGE Bin_Fit AS

TYPE bin_fit_rec_type IS RECORD (item_name VARCHAR2(100), item_value NUMBER, bin NUMBER);
TYPE bin_fit_list_type IS VARRAY(1000) OF bin_fit_rec_type;

TYPE bin_fit_cur_rec_type IS RECORD (item_name VARCHAR2(100), item_value NUMBER);
TYPE bin_fit_cur_type IS REF CURSOR RETURN bin_fit_cur_rec_type;

FUNCTION Items_Binned (p_items_cur bin_fit_cur_type, p_n_bins PLS_INTEGER) RETURN bin_fit_list_type PIPELINED;

END Bin_Fit;
/
CREATE OR REPLACE PACKAGE BODY Bin_Fit AS

c_big_value                 CONSTANT NUMBER := 100000000;
TYPE bin_fit_cur_list_type  IS VARRAY(100) OF bin_fit_cur_rec_type;

FUNCTION Items_Binned (p_items_cur bin_fit_cur_type, p_n_bins PLS_INTEGER) RETURN bin_fit_list_type PIPELINED IS

  l_min_bin              PLS_INTEGER := 1;
  l_min_bin_val             NUMBER;
  l_bins                    SYS.ODCINumberList := SYS.ODCINumberList();
  l_bin_fit_cur_rec         bin_fit_cur_rec_type;
  l_bin_fit_rec             bin_fit_rec_type;
  l_bin_fit_cur_list        bin_fit_cur_list_type;

BEGIN

  l_bins.Extend (p_n_bins);
  FOR i IN 1..p_n_bins LOOP
    l_bins(i) := 0;
  END LOOP;

  LOOP

    FETCH p_items_cur BULK COLLECT INTO l_bin_fit_cur_list LIMIT 100;
    EXIT WHEN l_bin_fit_cur_list.COUNT = 0;

    FOR j IN 1..l_bin_fit_cur_list.COUNT LOOP

      l_bin_fit_rec.item_name := l_bin_fit_cur_list(j).item_name;
      l_bin_fit_rec.item_value := l_bin_fit_cur_list(j).item_value;
      l_bin_fit_rec.bin := l_min_bin;

      PIPE ROW (l_bin_fit_rec);
      l_bins(l_min_bin) := l_bins(l_min_bin) + l_bin_fit_cur_list(j).item_value;

      l_min_bin_val := c_big_value;
      FOR i IN 1..p_n_bins LOOP

        IF l_bins(i) < l_min_bin_val THEN
          l_min_bin := i;
          l_min_bin_val := l_bins(i);
        END IF;

      END LOOP;

    END LOOP;

  END LOOP;

  RETURN;

END Items_Binned;

END Bin_Fit;
/

SQL Query

SELECT item_name, bin, item_value, Sum (item_value) OVER (PARTITION BY bin) bin_value
  FROM TABLE (Bin_Fit.Items_Binned (
                     CURSOR (SELECT item_name, item_value FROM items ORDER BY item_value DESC), 
                     :N_BINS))
 ORDER BY item_value DESC

The idea here is that procedural algorithms can often be implemented more efficiently in PL/SQL than in SQL.

  • The first parameter to the function is a strongly-typed reference cursor
  • The SQL call passes in a SELECT statement wrapped in the CURSOR keyword, so the function can be used for any set of records that returns name and numeric value pairs
  • The item records are fetched in batches of 100 using the LIMIT clause to improve efficiency
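
Because the cursor parameter is generic, the same function can bin any name/value row source without change; for example, against a hypothetical stock table:

SELECT item_name, bin, item_value, Sum (item_value) OVER (PARTITION BY bin) bin_value
  FROM TABLE (Bin_Fit.Items_Binned (
                     CURSOR (SELECT product_code, weight FROM stock ORDER BY weight DESC), 
                     :N_BINS))
 ORDER BY item_value DESC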

Performance Testing
I tested performance of the various queries using my own benchmarking framework across grids of data points, with two data sets: a smaller one for the slower queries and a larger one for the faster ones.

Query Modifications for Performance Testing

  • The RSF query with staging table was run within a pipelined function in order to easily include the insert in the timings
  • A system context was used to pass the bind variables as the framework runs the queries from PL/SQL, not from SQL*Plus
  • I found that calculating the bin values using analytic sums, as in the code above, affected performance, so I removed this for clarity of results, outputting only item name, value and bin

Test Data Sets
For a given depth parameter, d, random numbers were inserted within the range 0-d for d-1 records. The insert was:

 INSERT INTO items
  SELECT 'item-' || n, DBMS_Random.Value (0, p_point_deep) FROM  
  (SELECT LEVEL n FROM DUAL CONNECT BY LEVEL < p_point_deep);
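
The items table itself is not listed above; judging by the temporary table defined earlier, its structure would be something like this (my assumption):

CREATE TABLE items (
   item_name  VARCHAR2(30) NOT NULL,  
   item_value NUMBER(8) NOT NULL
)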

The number of bins was passed as a width parameter, but note that the original, linked Model solution, MODO, hard-codes the number of bins to 3.

Test Results

Data Set 1 - Small
This was used for the following queries:

  • MODO - Original Model for GDY
  • MODB - Brendan's Generic Model for GDY
  • RSFQ - Recursive Subquery Factor for GBR
This table gives the CPU times in seconds across the data set (note that MODO always uses three bins, whatever the width point):

Run Type=MODO
 Depth         W3         W3         W3
 D1000       1.03       1.77       1.05
 D2000       3.98       6.46       5.38
 D4000      15.79       20.7      25.58
 D8000      63.18      88.75      92.27
D16000      364.2     347.74     351.99
Run Type=MODB
 Depth         W3         W6        W12
 D1000        .27        .42        .27
 D2000          1       1.58       1.59
 D4000       3.86        3.8       6.19
 D8000      23.26      24.57      17.19
D16000      82.29      92.04      96.02
Run Type=RSFQ
 Depth         W3         W6        W12
 D1000       3.24       3.17       1.53
 D2000       8.58       9.68       8.02
 D4000      25.65      24.07      23.17
 D8000      111.3     108.25      98.33
D16000     471.17     407.65     399.99

[Graph: Slice W3]
The results show:

  • Quadratic variation of CPU time with number of items
  • Little variation of CPU time with number of bins, although RSFQ seems to show some decline
  • RSFQ is slightly slower than MODO, while my version of Model, MODB, is about 4 times faster than MODO

Data Set 2 - Large
This was used for the following queries:

  • RSFT - Recursive Subquery Factor for GBR with Temporary Table
  • POSS - Plain Old SQL Solution for TPA
  • PLFN - Pipelined Function for GDY

This table gives the CPU times in seconds across the data set:

  Depth       W100      W1000     W10000
Run Type=PLFN
 D20000        .31       1.92      19.25
 D40000        .65       3.87      55.78
 D80000       1.28       7.72      92.83
D160000       2.67      16.59     214.96
D320000       5.29      38.68      418.7
D640000      11.61      84.57      823.9
Run Type=POSS
 D20000        .09        .13        .13
 D40000        .18        .21        .18
 D80000        .27        .36         .6
D160000        .74       1.07        .83
D320000       1.36       1.58       1.58
D640000       3.13       3.97       4.04
Run Type=RSFT
 D20000        .78        .78        .84
 D40000       1.41       1.54        1.7
 D80000       3.02       3.39       4.88
D160000       6.11       9.56       8.42
D320000      13.05      18.93      20.84
D640000      41.62      40.98      41.09

[Graph: Slice W100]

[Graph: Slice W10000]
The results show:

  • Linear variation of CPU time with number of items
  • Little variation of CPU time with number of bins for POSS and RSFT, but roughly linear variation for PLFN
  • These linear methods are much faster than the earlier quadratic ones for larger numbers of items
  • The approximate proportionality of PLFN's time to the number of bins means that, while PLFN is faster than RSFT for small numbers of bins, it becomes slower from around 50 bins for our problem
  • The proportionality to number of bins for PLFN presumably arises from the step that scans all the bins to find the one of minimum value
  • The lack of proportionality to number of bins for RSFT may appear surprising, since it performs a sort of the bins at each iteration. However, while the work for this sort is likely to be proportional to the number of bins, the number of iterations is inversely proportional to it, and so the variation cancels out
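
In rough terms: with M items and N bins, GBR performs about M/N iterations, each ranking N bins, so the total ranking work is of order (M/N) x N = M, independent of the number of bins.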

Solution Quality

The methods reported above implement three underlying algorithms, none of which guarantees an optimal solution. In order to get an idea of how the quality compares, I created new versions of the second set of queries using analytic functions to output the difference between minimum and maximum bin values, with percentage of the maximum also output. I ran these on the same grid, and report below the results for the four corners.

Point           PLFN          RSFT          POSS
W100/D20000     72/.004%      72/.004%      19,825/1%
W100/D640000    60/.000003%   60/.000003%   633,499/.03%
W10000/D20000   189/.9%       180/.9%       19,995/67%
W10000/D640000  695/.003%     695/.003%     639,933/3%

The results indicate that GDY (Greedy Algorithm) and GBR (Greedy Algorithm with Batched Rebalancing) generally give very similar quality results, while TPA (Team Picking Algorithm) tends to be quite a lot worse.

Extended Problem: Finding the Number of Bins Required

An important extension to the problem is when the bins have fixed capacity, and it is desired to find the minimum number of bins, then spread the items evenly between them. As mentioned at the start, I posted extensions to two of my solutions on an OTN thread, and I reproduce them here. It turns out to be quite easy to make the extension. The remainder of this section is just lifted from my OTN post and refers to the table of the original poster.

Start OTN Extract
So how do we determine the number of bins? The total quantity divided by bin capacity, rounded up, gives a lower bound on the number of bins needed. The actual number required may be larger, but mostly it will be within a very small range from the lower bound, I believe (I suspect it will nearly always be the lower bound). A good practical solution, therefore, would be to compute the solutions for a base number, plus one or more increments, and this can be done with negligible extra work (although Model might be an exception, I haven't tried it). Then the bin totals can be computed, and the first solution that meets the constraints can be used. I took two bin sets here.
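
For example (with my own illustrative figures), a total quantity of 1,000 against a bin capacity of 300 gives Ceil (1000/300) = 4 as the lower bound, so with two bin sets the solutions for 4 and 5 bins would be computed.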

SQL POS

WITH items AS (
       SELECT sl_pm_code item_name, sl_wt item_amt, sl_qty item_qty,
              Ceil (Sum(sl_qty) OVER () / :MAX_QTY) n_bins
         FROM ow_ship_det
), items_desc AS (
       SELECT item_name, item_amt, item_qty, n_bins,
              Mod (Row_Number () OVER (ORDER BY item_qty DESC), n_bins) bin_1,
              Mod (Row_Number () OVER (ORDER BY item_qty DESC), n_bins + 1) bin_2
         FROM items
)
SELECT item_name, item_amt, item_qty, 
       CASE bin_1 WHEN 0 THEN n_bins ELSE bin_1 END bin_1, 
       CASE bin_2 WHEN 0 THEN n_bins + 1 ELSE bin_2 END bin_2, 
       Sum (item_amt) OVER (PARTITION BY bin_1) bin_1_amt,
       Sum (item_qty) OVER (PARTITION BY bin_1) bin_1_qty,
       Sum (item_amt) OVER (PARTITION BY bin_2) bin_2_amt,
       Sum (item_qty) OVER (PARTITION BY bin_2) bin_2_qty
  FROM items_desc
 ORDER BY item_qty DESC, bin_1, bin_2

SQL Pipelined

SELECT osd.sl_pm_code item_name, osd.sl_wt item_amt, osd.sl_qty item_qty, 
       tab.bin_1, tab.bin_2, 
       Sum (osd.sl_wt) OVER (PARTITION BY tab.bin_1) bin_1_amt,
       Sum (osd.sl_qty) OVER (PARTITION BY tab.bin_1) bin_1_qty,
       Sum (osd.sl_wt) OVER (PARTITION BY tab.bin_2) bin_2_amt,
       Sum (osd.sl_qty) OVER (PARTITION BY tab.bin_2) bin_2_qty
  FROM ow_ship_det osd
  JOIN TABLE (Bin_Even.Items_Binned (
                     CURSOR (SELECT sl_pm_code item_name, sl_qty item_value,
                                    Sum(sl_qty) OVER () item_total
                               FROM ow_ship_det
                              ORDER BY sl_qty DESC, sl_wt DESC),
                     :MAX_QTY)) tab
    ON tab.item_name = osd.sl_pm_code
 ORDER BY osd.sl_qty DESC, tab.bin_1

Pipelined Function

CREATE OR REPLACE PACKAGE Bin_Even AS

TYPE bin_even_rec_type IS RECORD (item_name VARCHAR2(100), item_value NUMBER, bin_1 NUMBER, bin_2 NUMBER);
TYPE bin_even_list_type IS VARRAY(1000) OF bin_even_rec_type;

TYPE bin_even_cur_rec_type IS RECORD (item_name VARCHAR2(100), item_value NUMBER, item_total NUMBER);
TYPE bin_even_cur_type IS REF CURSOR RETURN bin_even_cur_rec_type;

FUNCTION Items_Binned (p_items_cur bin_even_cur_type, p_bin_max NUMBER) RETURN bin_even_list_type PIPELINED;

END Bin_Even;
/
SHO ERR
CREATE OR REPLACE PACKAGE BODY Bin_Even AS

c_big_value                 CONSTANT NUMBER := 100000000;
c_n_bin_sets                CONSTANT NUMBER := 2;

TYPE bin_even_cur_list_type IS VARRAY(100) OF bin_even_cur_rec_type;
TYPE num_lol_list_type      IS VARRAY(100) OF SYS.ODCINumberList;

FUNCTION Items_Binned (p_items_cur bin_even_cur_type, p_bin_max NUMBER) RETURN bin_even_list_type PIPELINED IS

  l_min_bin                 SYS.ODCINumberList := SYS.ODCINumberList (1, 1);
  l_min_bin_val             SYS.ODCINumberList := SYS.ODCINumberList (c_big_value, c_big_value);
  l_bins                    num_lol_list_type := num_lol_list_type (SYS.ODCINumberList(), SYS.ODCINumberList());

  l_bin_even_cur_rec        bin_even_cur_rec_type;
  l_bin_even_rec            bin_even_rec_type;
  l_bin_even_cur_list       bin_even_cur_list_type;

  l_n_bins                  PLS_INTEGER;
  l_n_bins_base             PLS_INTEGER;
  l_is_first_fetch          BOOLEAN := TRUE;

BEGIN

  LOOP

    FETCH p_items_cur BULK COLLECT INTO l_bin_even_cur_list LIMIT 100;
    EXIT WHEN l_Bin_Even_cur_list.COUNT = 0;
    IF l_is_first_fetch THEN

      l_n_bins_base := Ceil (l_Bin_Even_cur_list(1).item_total / p_bin_max) - 1;

      l_is_first_fetch := FALSE;

      l_n_bins := l_n_bins_base;
      FOR i IN 1..c_n_bin_sets LOOP

        l_n_bins := l_n_bins + 1;
        l_bins(i).Extend (l_n_bins);
        FOR k IN 1..l_n_bins LOOP
          l_bins(i)(k) := 0;
        END LOOP;

      END LOOP;

    END IF;

    FOR j IN 1..l_Bin_Even_cur_list.COUNT LOOP

      l_bin_even_rec.item_name := l_bin_even_cur_list(j).item_name;
      l_bin_even_rec.item_value := l_bin_even_cur_list(j).item_value;
      l_bin_even_rec.bin_1 := l_min_bin(1);
      l_bin_even_rec.bin_2 := l_min_bin(2);

      PIPE ROW (l_bin_even_rec);

      l_n_bins := l_n_bins_base;
      FOR i IN 1..c_n_bin_sets LOOP
        l_n_bins := l_n_bins + 1;
        l_bins(i)(l_min_bin(i)) := l_bins(i)(l_min_bin(i)) + l_Bin_Even_cur_list(j).item_value;

        l_min_bin_val(i) := c_big_value;
        FOR k IN 1..l_n_bins LOOP

          IF l_bins(i)(k) < l_min_bin_val(i) THEN
            l_min_bin(i) := k;
            l_min_bin_val(i) := l_bins(i)(k);
          END IF;

        END LOOP;

      END LOOP;

    END LOOP;

  END LOOP;

  RETURN;

END Items_Binned;

END Bin_Even;
/

Output POS
Note BIN_1 means bin set 1, which turns out to have 4 bins, while bin set 2 then necessarily has 5.

ITEM_NAME         ITEM_AMT   ITEM_QTY      BIN_1      BIN_2  BIN_1_AMT  BIN_1_QTY  BIN_2_AMT  BIN_2_QTY
--------------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
1239606-1080          4024        266          1          1      25562        995      17482        827
1239606-1045          1880        192          2          2      19394        886      14568        732
1239606-1044          1567        160          3          3      18115        835      14097        688
1239606-1081          2118        140          4          4      18988        793      17130        657
1239606-2094          5741         96          1          5      25562        995      18782        605
...
1239606-2107            80          3          4          2      18988        793      14568        732
1239606-2084           122          3          4          3      18988        793      14097        688
1239606-2110           210          2          2          3      19394        886      14097        688
1239606-4022           212          2          3          4      18115        835      17130        657
1239606-4021           212          2          4          5      18988        793      18782        605

Output Pipelined

ITEM_NAME         ITEM_AMT   ITEM_QTY      BIN_1      BIN_2  BIN_1_AMT  BIN_1_QTY  BIN_2_AMT  BIN_2_QTY
--------------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
1239606-1080          4024        266          1          1      20627        878      15805        703
1239606-1045          1880        192          2          2      18220        877      16176        703
1239606-1044          1567        160          3          3      20425        878      15651        701
1239606-1081          2118        140          4          4      22787        876      14797        701
1239606-2094          5741         96          4          5      22787        876      19630        701
...
1239606-2089            80          3          4          1      22787        876      15805        703
1239606-2112           141          3          4          2      22787        876      16176        703
1239606-4022           212          2          1          1      20627        878      15805        703
1239606-4021           212          2          2          1      18220        877      15805        703
1239606-2110           210          2          3          2      20425        878      16176        703

End OTN Extract

Conclusions

  • Various solutions for the balanced number partitioning problem have been presented, using Oracle's Model clause, Recursive Subquery Factoring, Pipelined Functions and simple SQL
  • The performance characteristics of these solutions have been tested across a range of data sets
  • As is often the case, the best solution depends on the shape and size of the data set
  • A simple extension has been shown to allow determining the number of bins required in the bin-fitting interpretation of the problem
  • Replacing a WITH clause with a staging table can be a useful technique to allow indexed scans

Get the code here: Brendan's repo for interesting SQL






SQL for Network Grouping

I noticed an interesting question posted on OTN this (European) Saturday morning,
Hierarchical query to combine two groupings into one broad joint grouping. I quickly realised that the problem posed was an example of a very general class of network problems that arises quite often:

Given a set of nodes and a rule for pair-wise (non-directional) linking, obtain the set of implied networks

Usually, in response to such a problem someone will suggest a CONNECT BY query solution. Unfortunately, although hierarchical SQL techniques can be used theoretically to resolve these non-hierarchical networks, they tend to be extremely inefficient for networks of any size and are therefore often impractical. There are two problems in particular:

  1. Non-hierarchical networks have no root nodes, so the traversal needs to be repeated from every node in the network set
  2. Hierarchical queries retrieve all possible routes through a network

I illustrated the second problem in my last post, Notes on Profiling Oracle PL/SQL, and I intend to write a longer article on the subject of networks at a later date. The most efficient way to traverse generalised networks in Oracle involves the use of PL/SQL, such as in my Scribd article of June 2010, An Oracle Network Traversal PL SQL Program. For this article, though, I will stick to SQL-only techniques and will write down three solutions in a general format whereby the tables of the specific problem are read by initial subquery factors links_v and nodes_v that are used in the rest of the queries. I’ll save detailed explanation and performance analysis for the later article (see update at end of this section).

The three queries use two hierarchical methods and a method involving the Model clause:

  1. CONNECT BY: This is the least efficient
  2. Recursive subquery factors: This is more efficient than the first but still suffers from the two problems above
  3. Model clause: This is intended to bypass the performance problems of hierarchical queries, but is still slower than PL/SQL

[Update, 2 September 2015] I have since written two articles on related subjects, the first, PL/SQL Pipelined Function for Network Analysis describes a PL/SQL program that traverses all networks and lists their structure. The second, Recursive SQL for Network Analysis, and Duality, uses a series of examples to illustrate and explain the different characteristics of the first two recursive SQL methods.

Problem Definition
Data Structure
I have taken the data structure of the OTN poster, made all fields character, and added three more records comprising a second isolated node (10) and a subnetwork of nodes 08 and 09. ITEM_ID is taken to be the primary key.

SQL> SELECT *
  2    FROM item_groups
  3  /

ITEM_ID    GROUP1   GROUP2
---------- -------- --------
01         A        100
02         A        100
03         A        101
04         B        100
05         B        102
06         C        103
07         D        101
08         E        104
09         E        105
10         F        106

10 rows selected.

Grouping Structure
The poster defines two items to be linked if they share the same value for either GROUP1 or GROUP2 attributes (which could obviously be generalised to any number of attributes), and items are in the same group if they can be connected by a chain of links. Observe that if there were only one grouping attribute then the problem would be trivial as that would itself group the items. Having more than one makes it more interesting and more difficult.
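
For example, in the data above, items 01 and 02 are linked via GROUP1 = 'A', item 04 links to them via GROUP2 = 100, item 05 links to 04 via GROUP1 = 'B', item 03 links to 01 via GROUP1 = 'A', and item 07 links to 03 via GROUP2 = 101: so items 01-05 and 07 form a single network, as the outputs below confirm.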

A real-world example of such networks is sibling groups: take people as the nodes, with father and mother as the two grouping attributes.

Network Diagram
[Figure: Item Groups, v1.0]
CONNECT BY Solution
SQL

WITH links_v AS (
SELECT t_fr.item_id node_id_fr,
       t_to.item_id node_id_to,
       t_fr.item_id || '-' || Row_Number() OVER (PARTITION BY t_fr.item_id ORDER BY t_to.item_id) link_id
  FROM item_groups t_fr
  JOIN item_groups t_to
    ON t_to.item_id > t_fr.item_id
   AND (t_to.group1 = t_fr.group1 OR t_to.group2 = t_fr.group2)
), nodes_v AS (
 SELECT item_id node_id
   FROM item_groups
), tree AS (
SELECT link_id, CONNECT_BY_ROOT (link_id) root_id
  FROM links_v
CONNECT BY NOCYCLE (node_id_fr = PRIOR node_id_to OR node_id_to = PRIOR node_id_fr OR 
                     node_id_fr = PRIOR node_id_fr OR node_id_to = PRIOR node_id_to)
), group_by_link AS (
SELECT DISTINCT Min (root_id) OVER (PARTITION BY link_id) group_id, link_id
  FROM tree
), linked_nodes AS (
SELECT g.group_id, l.node_id_fr node_id
  FROM group_by_link g
  JOIN links_v l
    ON l.link_id = g.link_id
 UNION
SELECT g.group_id, l.node_id_to
  FROM group_by_link g
  JOIN links_v l
    ON l.link_id = g.link_id
)
SELECT l.group_id "Network", l.node_id "Node"
  FROM linked_nodes l
 UNION ALL
SELECT '00 (unlinked)', node_id
  FROM nodes_v n
 WHERE n.node_id NOT IN (SELECT node_id FROM linked_nodes)
ORDER BY 1, 2

Output

All networks by CONNECT BY - Unlinked nodes share network id 0
Network       Node
------------- ----
00 (unlinked) 06
              10
01-1          01
              02
              03
              04
              05
              07
08-1          08
              09

10 rows selected.

Notes on CONNECT BY Solution

  • For convenience I have grouped the unlinked nodes into one dummy network; it’s easy to assign them individual identifiers if desired

Recursive Subquery Factors (RSF) Solution
SQL

WITH links_v AS (
SELECT t_fr.item_id node_id_fr,
       t_to.item_id node_id_to,
       t_fr.item_id || '-' || Row_Number() OVER (PARTITION BY t_fr.item_id ORDER BY t_to.item_id) link_id
  FROM item_groups t_fr
  JOIN item_groups t_to
    ON t_to.item_id > t_fr.item_id
   AND (t_to.group1 = t_fr.group1 OR t_to.group2 = t_fr.group2)
), nodes_v AS (
 SELECT item_id node_id
   FROM item_groups
), rsf (node_id, id, root_id) AS (
SELECT node_id, NULL, node_id
  FROM nodes_v
 UNION ALL
SELECT CASE WHEN l.node_id_to = r.node_id THEN l.node_id_fr ELSE l.node_id_to END, 
       l.link_id id, r.root_id
  FROM rsf r
  JOIN links_v l
    ON (l.node_id_fr = r.node_id OR l.node_id_to = r.node_id)
   AND l.link_id != Nvl (r.id, '0')
) CYCLE node_id SET is_cycle TO '*' DEFAULT ' '
SELECT DISTINCT Min (root_id) OVER (PARTITION BY node_id) "Network", node_id "Node"
  FROM rsf
 ORDER BY 1, 2

Output

All networks by RSF - Unlinked nodes have their own network ids
Network       Node
------------- ----
01            01
              02
              03
              04
              05
              07
06            06
08            08
              09
10            10

10 rows selected.

Notes on Recursive Subquery Factors (RSF) Solution

  • Here I have given the unlinked nodes their own network identifiers; they could equally have been grouped together under a dummy network

Model Clause Solution
SQL

WITH links_v AS (
SELECT t_fr.item_id node_id_fr,
       t_to.item_id node_id_to,
       t_fr.item_id || '-' || Row_Number() OVER (PARTITION BY t_fr.item_id ORDER BY t_to.item_id) link_id
  FROM item_groups t_fr
  JOIN item_groups t_to
    ON t_to.item_id > t_fr.item_id
   AND (t_to.group1 = t_fr.group1 OR t_to.group2 = t_fr.group2)
), nodes_v AS (
 SELECT item_id node_id
   FROM item_groups
), lnk_iter AS (
SELECT *
  FROM links_v
 CROSS JOIN (SELECT 0 iter FROM DUAL UNION SELECT 1 FROM DUAL)
), mod AS (
SELECT *
  FROM lnk_iter
MODEL
  DIMENSION BY (Row_Number() OVER (PARTITION BY iter ORDER BY link_id) rn, iter)
  MEASURES (Row_Number() OVER (PARTITION BY iter ORDER BY link_id) id_rn, link_id id,
        node_id_fr nd1, node_id_to nd2,
        1 lnk_cur,
        CAST ('x' AS VARCHAR2(100)) nd1_cur,
        CAST ('x' AS VARCHAR2(100)) nd2_cur,
        0 net_cur,
        CAST (NULL AS NUMBER) net,
        CAST (NULL AS NUMBER) lnk_prc,
        1 not_done,
        0 itnum)
  RULES UPSERT ALL
  ITERATE(100000) UNTIL (lnk_cur[1, Mod (iteration_number+1, 2)] IS NULL)
  (
    itnum[ANY, ANY] = iteration_number,
    not_done[ANY, Mod (iteration_number+1, 2)] = Count (CASE WHEN net IS NULL THEN 1 END)[ANY, Mod (iteration_number, 2)],
    lnk_cur[ANY, Mod (iteration_number+1, 2)] = 
        CASE WHEN not_done[CV(), Mod (iteration_number+1, 2)] > 0 THEN 
               Nvl (Min (CASE WHEN lnk_prc IS NULL AND net = net_cur THEN id_rn END)[ANY, Mod (iteration_number, 2)],
                    Min (CASE WHEN net IS NULL THEN id_rn END)[ANY, Mod (iteration_number, 2)])
        END,
    lnk_prc[ANY, Mod (iteration_number+1, 2)] = lnk_prc[CV(), Mod (iteration_number, 2)],
    lnk_prc[lnk_cur[1, Mod (iteration_number+1, 2)], Mod (iteration_number+1, 2)] = 1,
    net_cur[ANY, Mod (iteration_number+1, 2)] = 
        CASE WHEN Min (CASE WHEN lnk_prc IS NULL AND net = net_cur THEN id_rn END)[ANY, Mod (iteration_number, 2)] IS NULL THEN
               net_cur[CV(), Mod (iteration_number, 2)] + 1 
             ELSE 
               net_cur[CV(), Mod (iteration_number, 2)] 
        END,
    nd1_cur[ANY, Mod (iteration_number+1, 2)] = nd1[lnk_cur[CV(), Mod (iteration_number+1, 2)], Mod (iteration_number, 2)],
    nd2_cur[ANY, Mod (iteration_number+1, 2)] = nd2[lnk_cur[CV(), Mod (iteration_number+1, 2)], Mod (iteration_number, 2)],
    net[ANY, Mod (iteration_number+1, 2)] = 
        CASE WHEN (nd1[CV(),Mod (iteration_number+1, 2)] IN (nd1_cur[CV(),Mod (iteration_number+1, 2)], nd2_cur[CV(),Mod (iteration_number+1, 2)]) OR 
                   nd2[CV(),Mod (iteration_number+1, 2)] IN (nd1_cur[CV(),Mod (iteration_number+1, 2)], nd2_cur[CV(),Mod (iteration_number+1, 2)]))
                   AND net[CV(),Mod (iteration_number, 2)] IS NULL THEN
               net_cur[CV(),Mod (iteration_number+1, 2)]
             ELSE
               net[CV(),Mod (iteration_number, 2)]
        END
  )
)
SELECT To_Char (net) "Network", nd1 "Node"
  FROM mod
 WHERE not_done = 0
 UNION
SELECT To_Char (net), nd2
  FROM mod
 WHERE not_done = 0
 UNION ALL
SELECT '0 (unlinked)', node_id
  FROM nodes_v n
 WHERE n.node_id NOT IN (SELECT nd1 FROM mod WHERE nd1 IS NOT NULL UNION SELECT nd2 FROM mod WHERE nd2 IS NOT NULL)
ORDER by 1, 2

Output

All networks by Model - Unlinked nodes share network id 00
Network       Node
------------- ----
0 (unlinked)  06
              10
1             01
              02
              03
              04
              05
              07
2             08
              09

10 rows selected.

Notes on Model Clause Solution

  • For convenience I have grouped the unlinked nodes into one dummy network; it’s easy to assign them individual identifiers if desired
  • My Cyclic Iteration technique used here appears to be novel
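
In outline, the technique works as follows: the cross join to the two iter values (0 and 1) keeps two copies of the link set, and the Mod (iteration_number, 2) and Mod (iteration_number + 1, 2) expressions make each iteration read its state from one copy while writing the updated state to the other, so that no rule reads cells it is updating within the same iteration.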

Conclusions

  • It is always advisable with a new problem in SQL to consider whether it falls into a general class of problems for which solutions have already been found
  • Three solution methods for network resolution in pure SQL have been presented and demonstrated on a small test problem; the performance issues mentioned should be considered carefully before applying them on larger problems
  • The Model clause solution is likely to be the most efficient on larger, looped networks, but if better performance is required then PL/SQL recursion-based methods would be faster
  • For smaller problems with few loops the simpler method of recursive subquery factors may be preferred, or, for versions prior to v11.2, CONNECT BY






List Aggregation in Oracle – Comparing Three Methods

In my last article, Grouping by Unique Subsequences in Oracle, I compared three solutions to a querying problem in Oracle. I found that a solution using a pipelined function was fastest across a range of test data sets, while another using Oracle’s Model clause turned out to be extremely inefficient and unscalable, owing to quadratic variation in execution times.

I mentioned in the article the issue of possible poor cardinality estimates by Oracle’s Cost-Based Optimiser (CBO) for pipelined functions, referencing an article by Adrian Billington, setting cardinality for pipelined and table functions, that considers four techniques for improving these estimates.

In this article, I want to use another, very common, querying problem, to see in more detail how one of these techniques works, namely the dynamic sampling hint, and to compare the performance again of pipelined functions against the Model clause on a second example amenable to both methods.

In previous articles I have generally focussed on elapsed and CPU times to measure performance, but Oracle provides a range of metrics in instrumenting query execution. My benchmarking framework captures many of these, and we’ll use them to try to understand the performance variation, again using a 2-dimensional domain of test data. We also capture the execution plans used and display visually how they change across the domain.

The problem is that of list aggregation, and various solution methods are available depending on one’s Oracle version. In Oracle v11.2 a specific built-in function has been provided, Listagg, and Adrian Billington has compared it with earlier SQL techniques (listagg function in 11g release 2), including a Model solution. He looks only at pure SQL solutions, but we will take both the Model and Listagg solutions, and add in a pipelined function solution. We’ll take a simple test problem using Oracle’s demo HR schema, which will be: Return, for each department, its id, manager name, and a comma-separated, ordered list of its employee names.

Test Data
The HR tables employees and departments were copied structurally to test versions with _t suffixes and records were inserted programmatically by the performance testing packages.

ERD

Listagg Solution
How It Works
The first solution for this problem uses the aggregation function ListAgg, which is new in Oracle v11.2. The query groups employees by department, joins departments to get the manager id, and employees again to get the manager’s name.

Query Diagram

SQL

SELECT e.department_id,
       m.last_name manager,
       ListAgg (e.last_name, ',') WITHIN GROUP (ORDER BY e.last_name) emp_names
  FROM employees_t e
  JOIN departments_t d
    ON d.department_id = e.department_id
  JOIN employees_t m
    ON m.employee_id = d.manager_id
 GROUP BY 
       e.department_id,
       m.last_name
 ORDER BY e.department_id

Model Solution
How It Works

  1. Within an inline view, form the basic Select, with the department_id column, and append placeholders for the employee list and row number
  2. Add the Model keyword, partitioning by department_id, dimensioning by analytic function Row_Number, ordering by name within department, with name and name list as measures
  3. Define the only rule to prepend the list from the next element with the current name, going backwards (so the first record will be the one you want)
  4. Join the other tables to the inline view in the main query, strip off the last ‘,’, and filter out all except the first record for the department

Query Diagram

SQL

SELECT v.department_id, 
       e.last_name manager,
       RTrim (v.emp_names, ',') emp_names  
  FROM (
SELECT department_id,
       emp_names,
       rn
  FROM employees_t 
    MODEL  
      PARTITION BY (department_id)  
      DIMENSION BY (Row_Number() OVER 
                       (PARTITION BY department_id ORDER BY last_name) rn)
      MEASURES (last_name, CAST(NULL AS VARCHAR2(4000)) emp_names) 
      RULES (
        emp_names[ANY] ORDER BY rn DESC = last_name[CV()] || ',' || emp_names[CV()+1] 
      ) 
) v
  JOIN departments_t d
    ON d.department_id = v.department_id
  JOIN employees_t e
    ON e.employee_id = d.manager_id
 WHERE v.rn = 1 
 ORDER BY v.department_id

Pipelined Function Solution
How It Works
This approach is based on pipelined database functions, which are specified to return array types. Pipelining means that Oracle transparently returns the records in batches while processing continues, thus avoiding memory problems, and returning initial rows more quickly. Within the function there is a simple cursor loop over the employees, joining departments to get the manager id. A string variable accumulates the list of employees, until the department changes, when the record is piped out, and the string reset to the new employee. The last record has to be piped after exiting the loop.

Types
Two database types are specified, the first being an object with fields for the department and manager ids and the employee name list; the second is an array of the nested table form with elements of the first type.
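
The type definitions are not listed in the article; from the function below they would look something like this (my sketch):

CREATE OR REPLACE TYPE dep_emps_type AS OBJECT (
        department_id   NUMBER,
        manager_id      NUMBER,
        emp_names       VARCHAR2(4000)
);
/
CREATE OR REPLACE TYPE dep_emps_list_type AS TABLE OF dep_emps_type;
/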

Function Pseudocode

Loop over a cursor selecting the records in order 
    If the department changes or first record then
        If the department changes then
            Pipe the row out
        End if
        Reset variables to current record values
    Else
        Append the current employee name to the name list
    End if
End loop
If the last department is not null then
    Pipe the row out using saved values
End if

SQL
Just select the fields named in the record from the function wrapped in the TABLE keyword, and join employees to get the manager name.

Query Diagram

Function Definition (within package)

FUNCTION Dep_Emps RETURN dep_emps_list_type PIPELINED IS

  l_emp_names	        VARCHAR2(4000);
  old_manager_id        PLS_INTEGER;
  old_department_id     PLS_INTEGER;

BEGIN

  FOR r_val IN (SELECT e.department_id, d.manager_id, e.last_name 
                  FROM employees_t e
                  JOIN departments_t d
                    ON d.department_id = e.department_id
                 ORDER BY e.department_id, e.last_name) LOOP

    IF r_val.department_id != old_department_id OR old_department_id IS NULL THEN

      IF old_department_id IS NOT NULL THEN -- department has changed: pipe out the completed department using the saved values

        PIPE ROW (dep_emps_type (old_department_id, old_manager_id, l_emp_names));

      END IF;
      old_department_id := r_val.department_id;
      old_manager_id := r_val.manager_id;
      l_emp_names := r_val.last_name;

    ELSE

      l_emp_names := l_emp_names || ',' || r_val.last_name;

    END IF;

  END LOOP;

  IF old_department_id IS NOT NULL THEN
    PIPE ROW (dep_emps_type (old_department_id, old_manager_id, l_emp_names));
  END IF;

  RETURN;

END Dep_Emps;

SQL

SELECT d.department_id,
       e.last_name manager,
       d.emp_names
  FROM TABLE (Stragg.Dep_Emps) d
  JOIN employees_t e
    ON e.employee_id = d.manager_id
 ORDER BY d.department_id

Performance Analysis
As in the previous article, I have benchmarked across a 2-dimensional domain, in this case width being the number of employees per department, and depth the number of departments. For simplicity, a single department, ‘Accounting’, and employee, ‘John Chen’, were used as templates and inserted repeatedly with suffixes on names and new ids.

The three queries above were run on all data points, and in addition the function query was run with a hint: DYNAMIC_SAMPLING (d 5).
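
The hinted version simply adds the hint, referencing the table function's alias, to the query given earlier, along these lines:

SELECT /*+ DYNAMIC_SAMPLING (d 5) */
       d.department_id,
       e.last_name manager,
       d.emp_names
  FROM TABLE (Stragg.Dep_Emps) d
  JOIN employees_t e
    ON e.employee_id = d.manager_id
 ORDER BY d.department_id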

Record Counts (total employees and employees per department)

Timings

In the embedded Excel file above, the four solutions are labelled as follows:

  • F = Pipelined Function solution
  • D = Pipelined Function with Dynamic Sampling hint solution
  • L = Listagg solution
  • M = Model solution

For the data points, font colour and fill colour signify:

  • Fill colours correspond to distinct execution plans whose hash values can be found later in the same tab. The formatted outputs can be found in the Plans tab for all distinct plans for selected data points (with hyperlinks)
  • White font signifies that the smallest elapsed time for the given data point occurred for the given solution, for all the tables except CPU time
  • For the CPU time table higher in the tab, white font signifies that the smallest CPU time for the given data point occurred for the given solution
  • Red font indicates that the solution incurred more than 500 disk reads (see the disk reads table later in the tab)

Graphs

Comparison
The CPU time for all four queries increases approximately in proportion with either width or depth dimension when the other is fixed, which is not surprising. The elapsed times are very similar to the CPU times for all except the larger problems using Model, which we’ll discuss in a later section. For the largest data point, the times rank in the order F, L, D, M, from fastest to slowest.

The following points can be made:

  • The function query without the dynamic sampling hint was fastest for the largest data point, and by a significant margin over the next best, Listagg
  • The earlier detailed tables show that this was also true for the triangle of data points starting three points back in each dimension, indicating that this is the case once the problem size gets large enough
  • Similarly, the function query with dynamic sampling was in third place for all these larger problems
  • Model was always the slowest query, generally by a factor of about 8 over the fastest in terms of CPU time, but very much worse in elapsed time for problems above a certain size, and we’ll look at this in more detail next.

Model Performance Discontinuity
In Adrian Billington’s article mentioned above (listagg function in 11g release 2), he took a single large-ish data point for his problem and found that his Model query took 308 seconds compared with 6 seconds for Listagg. He puts the Model performance down to ‘an enormous number of direct path reads/writes to/from the temporary tablespace‘. In my last article, I also quoted the anonymous author of MODEL Performance Tuning: ‘In some cases MODEL query performance can even be so poor that it renders the query unusable. One of the keys to writing efficient and scalable MODEL queries seems to be keeping session memory use to a minimum’.

My benchmarking framework records the metrics from the view v$sql_plan_stats_all, which is used by DBMS_XPlan.Display_Cursor to write the execution plan, and prints to a CSV file aggregates of several of them over the plan, for example: Max (last_disk_reads). These are shown for the Model solution in the tab displayed below of the embedded Excel file (the next tab has 3-d graphs, but they may not display in a browser).

The tables show that memory increases with the problem size in each direction up to a maximum, at which points the number of disk reads jumps from a very low level and then rises with problem size. The maximum memory points have red font. These transitions represent discontinuities where there is a jump in the elapsed times, although CPU times continue to rise smoothly. So while Model is always slower than the other solutions, its much greater use of memory causes the discrepancy to increase dramatically when the processing spills to disk, which is consistent with, and extends, the observations of the authors mentioned.

Model and Listagg Cardinality Estimates
Here is the execution plan for the last data point:

----------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name          | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|
----------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |               |      1 |        |   1024 |00:01:25.46 |    5798 |    164K|    164K|       |       |          |         |
|   1 |  SORT ORDER BY          |               |      1 |    260K|   1024 |00:01:25.46 |    5798 |    164K|    164K|  2320K|   704K| 2062K (0)|         |
|*  2 |   HASH JOIN             |               |      1 |    260K|   1024 |00:01:20.86 |    5798 |    164K|    164K|   909K|   909K| 1230K (0)|         |
|*  3 |    HASH JOIN            |               |      1 |   1024 |   1024 |00:00:00.11 |    2901 |      0 |      0 |   935K|   935K| 1228K (0)|         |
|   4 |     TABLE ACCESS FULL   | DEPARTMENTS_T |      1 |   1024 |   1024 |00:00:00.01 |       6 |      0 |      0 |       |       |          |         |
|   5 |     TABLE ACCESS FULL   | EMPLOYEES_T   |      1 |    260K|    262K|00:00:00.40 |    2895 |      0 |      0 |       |       |          |         |
|*  6 |    VIEW                 |               |      1 |    260K|   1024 |00:01:20.68 |    2897 |    164K|    164K|       |       |          |         |
|   7 |     BUFFER SORT         |               |      1 |    260K|    262K|00:01:19.56 |    2897 |    164K|    164K|   297M|  5726K|   45M (0)|     265K|
|   8 |      SQL MODEL ORDERED  |               |      1 |    260K|    262K|01:28:30.86 |    2895 |    131K|    131K|  1044M|    28M|   51M (1)|         |
|   9 |       WINDOW SORT       |               |      1 |    260K|    262K|00:00:01.01 |    2895 |      0 |      0 |  7140K|  1067K| 6346K (0)|         |
|  10 |        TABLE ACCESS FULL| EMPLOYEES_T   |      1 |    260K|    262K|00:00:00.50 |    2895 |      0 |      0 |       |       |          |         |
----------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("D"."DEPARTMENT_ID"="V"."DEPARTMENT_ID")
   3 - access("E"."EMPLOYEE_ID"="D"."MANAGER_ID")
   6 - filter("V"."RN"=1)

Notice that at line 6 the cardinality estimate is 260K while the actual number of rows returned was 1024: the CBO has simply ignored the filtering down to department level by row number, and assumed the number of employees! We may ask whether this mis-estimate has affected the subsequent plan. Well, the result set at line 6 forms the second input to a hash join against another row set of actual cardinality 1024, so probably it has not made a significant difference in this case. It’s worth comparing the execution plan for Listagg:

---------------------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name          | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |               |      1 |        |   1024 |00:00:02.11 |    5796 |       |       |          |
|   1 |  SORT GROUP BY       |               |      1 |    185K|   1024 |00:00:02.11 |    5796 |    15M|  1999K|   14M (0)|
|*  2 |   HASH JOIN          |               |      1 |    260K|    262K|00:00:02.17 |    5796 |   921K|   921K| 1195K (0)|
|*  3 |    HASH JOIN         |               |      1 |   1024 |   1024 |00:00:00.11 |    2901 |   935K|   935K| 1229K (0)|
|   4 |     TABLE ACCESS FULL| DEPARTMENTS_T |      1 |   1024 |   1024 |00:00:00.01 |       6 |       |       |          |
|   5 |     TABLE ACCESS FULL| EMPLOYEES_T   |      1 |    260K|    262K|00:00:00.41 |    2895 |       |       |          |
|   6 |    TABLE ACCESS FULL | EMPLOYEES_T   |      1 |    260K|    262K|00:00:00.46 |    2895 |       |       |          |
---------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("D"."DEPARTMENT_ID"="E"."DEPARTMENT_ID")
   3 - access("M"."EMPLOYEE_ID"="D"."MANAGER_ID")

Notice that the cardinality estimates are accurate apart from at line 1 for the Sort Group By, where a similar error has been made in not allowing for the reduction in rows caused by the grouping. Again it doesn’t seem to have affected the plan adversely.

Memory Usage and Buffers
A few observations can be made about these statistics:

  • Buffers is sometimes seen as a good, fundamental measure of performance, but we can see that both function solutions have figures of about 9K, while the other two have very similar figures of about 6K, and these do not correlate well with actual time performance here
  • The buffers figure for Model at the minimum data point is more than half that for the maximum, unlike the other solutions where it is 1-3%. The ratio of records is only 0.2%, so this is hard to understand
  • Similarly, the memory usage for Model starts very high, 3.1M, before rising to its maximum of 54M. For pipelined functions the figures go from 6K to 2.8M, and for Listagg from 29K to 15M

Dynamic Sampling Effects
Here is the execution plan for the pipelined function solution at point (W256-D512), with my own timing output from ‘Timer Set…’ on:

----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name        | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |             |      1 |        |    512 |00:00:00.65 |    6097 |       |       |          |
|   1 |  SORT ORDER BY                      |             |      1 |   8168 |    512 |00:00:00.65 |    6097 |  1186K|   567K| 1054K (0)|
|*  2 |   HASH JOIN                         |             |      1 |   8168 |    512 |00:00:00.62 |    6097 |  1520K|   901K| 1706K (0)|
|   3 |    COLLECTION ITERATOR PICKLER FETCH| DEP_EMPS    |      1 |   8168 |    512 |00:00:00.54 |    3202 |       |       |          |
|   4 |    TABLE ACCESS FULL                | EMPLOYEES_T |      1 |    130K|    131K|00:00:00.20 |    2895 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("E"."EMPLOYEE_ID"=VALUE(KOKBF$))

Timer Set: Cursor, Constructed at 23 Sep 2012 07:49:43, written at 07:49:45
===========================================================================
[Timer timed: Elapsed (per call): 0.05 (0.000045), CPU (per call): 0.04 (0.000040), calls: 1000, '***' denotes corrected line below]

Timer                  Elapsed          CPU          Calls        Ela/Call        CPU/Call
-----------------   ----------   ----------   ------------   -------------   -------------
Open cursor               0.01         0.00              1         0.01000         0.00000
First fetch               0.65         0.63              1         0.64700         0.63000
Write to file             0.06         0.06              2         0.03050         0.03000
Remaining fetches         0.00         0.00              1         0.00000         0.00000
Write plan                0.64         0.61              1         0.64200         0.61000
(Other)                   0.11         0.05              1         0.11400         0.05000
-----------------   ----------   ----------   ------------   -------------   -------------
Total                     1.47         1.35              7         0.21057         0.19286
-----------------   ----------   ----------   ------------   -------------   -------------

Here is the output for the pipelined function solution with dynamic sampling at the same point (W256-D512):

-----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name        | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |             |      1 |        |    512 |00:00:00.59 |    4228 |       |       |          |
|   1 |  SORT ORDER BY                       |             |      1 |    512 |    512 |00:00:00.59 |    4228 |  1186K|   567K| 1054K (0)|
|   2 |   NESTED LOOPS                       |             |      1 |        |    512 |00:00:00.59 |    4228 |       |       |          |
|   3 |    NESTED LOOPS                      |             |      1 |    512 |    512 |00:00:00.58 |    3716 |       |       |          |
|   4 |     COLLECTION ITERATOR PICKLER FETCH| DEP_EMPS    |      1 |    512 |    512 |00:00:00.56 |    3202 |       |       |          |
|*  5 |     INDEX UNIQUE SCAN                | EMP_PK      |    512 |      1 |    512 |00:00:00.01 |     514 |       |       |          |
|   6 |    TABLE ACCESS BY INDEX ROWID       | EMPLOYEES_T |    512 |      1 |    512 |00:00:00.01 |     512 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   5 - access("E"."EMPLOYEE_ID"=VALUE(KOKBF$))

Note
-----
   - dynamic sampling used for this statement (level=2)

Timer Set: Cursor, Constructed at 23 Sep 2012 07:49:45, written at 07:49:47
===========================================================================
[Timer timed: Elapsed (per call): 0.05 (0.000045), CPU (per call): 0.05 (0.000050), calls: 1000, '***' denotes corrected line below]

Timer                  Elapsed          CPU          Calls        Ela/Call        CPU/Call
-----------------   ----------   ----------   ------------   -------------   -------------
Open cursor               0.55         0.55              1         0.54500         0.55000
First fetch               0.59         0.57              1         0.59300         0.57000
Write to file             0.06         0.06              2         0.03100         0.03000
Remaining fetches         0.00         0.00              1         0.00000         0.00000
Write plan                0.59         0.58              1         0.59300         0.58000
(Other)                   0.06         0.03              1         0.05800         0.03000
-----------------   ----------   ----------   ------------   -------------   -------------
Total                     1.85         1.79              7         0.26443         0.25571
-----------------   ----------   ----------   ------------   -------------   -------------

Notice that in the plan without dynamic sampling the cardinality estimate for the row set returned from the function is the default of 8168, about 16 times the 512 rows actually returned. When dynamic sampling is added, the cardinality estimate is exactly right. Also, the times reported in the plans are similar, but show dynamic sampling to be faster. How can that be, when the table of results earlier showed the query with dynamic sampling taking nearly twice as long?

My framework breaks down the times for the steps in running the query and writing the results out. From these figures we see that the difference is largely accounted for by cursor opening: the unhinted query takes only 0.01 seconds to open the cursor, while with dynamic sampling it takes 0.55 seconds. Evidently the dynamic sampling hint causes a call to the function at the query parsing stage that is not accounted for in the execution plan statistics. (Notice, incidentally, that Oracle is apparently performing a deferred open in both cases, where the actual cursor opening work is done at the time of the first fetch.)
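
The point can be illustrated with a minimal timing sketch (not my actual framework), using DBMS_Utility.Get_Time to separate the open step from the first fetch; the query here is just a placeholder:

DECLARE
  l_csr    SYS_REFCURSOR;
  l_start  PLS_INTEGER;
  l_id     employees_t.employee_id%TYPE;
BEGIN
  l_start := DBMS_Utility.Get_Time;  -- returns elapsed time in centiseconds
  OPEN l_csr FOR 'SELECT employee_id FROM employees_t';  -- parse + open: any sampling cost lands here
  DBMS_Output.Put_Line ('Open cursor: ' || (DBMS_Utility.Get_Time - l_start) / 100 || 's');

  l_start := DBMS_Utility.Get_Time;
  FETCH l_csr INTO l_id;             -- deferred open work shows up in the first fetch
  DBMS_Output.Put_Line ('First fetch: ' || (DBMS_Utility.Get_Time - l_start) / 100 || 's');
  CLOSE l_csr;
END;
/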

If one looks at the colour-coded table of elapsed times above, it can be seen that dynamic sampling causes a single plan to be chosen for all data points with depth below 1024, while without the hint two plans are chosen, the two versions agreeing on the plan only at depth 1024. It can also be seen that the dynamic sampling version is faster overall in most cases, but at the highest depth it is slower, because there the non-hinted plan is the same and the sampling adds only overhead. Up to that point the default cardinality estimate (8168) was too high.
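
For reference, a minimal sketch of how the hint is applied to the table function row source. The function name Dep_Emps is taken from the plan but the call and column names are assumptions; only the join column comes from the predicate section:

SELECT /*+ dynamic_sampling(d 2) */   -- sample the table function row source at level 2
       e.*
  FROM TABLE (Dep_Emps) d             -- hypothetical no-argument call; actual signature not shown here
  JOIN employees_t e
    ON e.employee_id = d.COLUMN_VALUE -- COLUMN_VALUE assumes a scalar collection
 ORDER BY e.employee_id

An alternative sometimes used to avoid the parse-time sampling call is the undocumented Cardinality hint, e.g. /*+ cardinality(d 512) */, at the cost of hard-coding the estimate.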

Conclusions
We have applied three techniques for list aggregation to one specific problem, and for that problem the following concluding remarks can be made:

  • The Model solution is much inferior in performance to both the new Listagg native function and the custom pipelined function solutions
  • The variation in performance has been analysed, and for the Model solution shown to become dramatically worse when the problem size causes processing that previously fitted in memory to spill to disk
  • The dynamic sampling hint has been applied to the pipelined function solution and shown to give correct cardinality estimates; in our example the performance effect was negative at the largest problem sizes, owing to the overhead involved, but elsewhere it was positive
  • We have observed that both the Model and Listagg solutions also have cardinality estimation problems, but have not analysed these further
  • The native Listagg solution is significantly, and surprisingly, slower than the custom pipelined function solution at the largest data points considered

We may say more generally:

  • Performance analysis across a 2-dimensional domain of data sets can provide more insight than just looking at one large zero-dimensional case
  • Microsoft Excel graphs and other functionality have proved very useful in helping to visualise what is happening in terms of performance
  • Our findings tend to bear out a common suspicion of the Model clause on performance grounds, and further support the view that pipelined functions, used appropriately, can be very effective in performance terms