Dimensional Benchmarking of SQL for Fixed-Depth Hierarchies

In my recent article, Dimensional Benchmarking of Bracket Parsing SQL, I showed how it was much more efficient to solve a particular database querying problem using a database function than by two other pure SQL methods. I have also written articles using recursive PL/SQL functions to traverse network or hierarchical data structures, such as PL/SQL Pipelined Function for Network Analysis.

Networks or hierarchies of arbitrary depth are difficult to traverse in SQL without using recursion. However, there also exist hierarchies of fixed and fairly small depth, and these can be traversed either recursively or by a sequence of joins, one for each level. In this article I compare the performance characteristics of three traversal methods, two recursive and one non-recursive, using my own benchmarking package (A Framework for Dimensional Benchmarking of SQL Performance), on a test problem of a fixed-level organization structure hierarchy, with 5 levels for performance testing and 3 levels for functional testing.

The three queries tested were:

  • JNS_QRY: Sequence of joins
  • PLF_QRY: Recursive pipelined function
  • RSF_QRY: Recursive subquery factors

Fixed Level Hierarchy Problem Definition

A hierarchy is assumed in which there are a number of root records, and at each level a parent can have multiple child records and a child can also have multiple parents. Each level in the hierarchy corresponds to an entity of a particular type. Each parent-child record is associated with a numerical factor, and the products of these propagate down the levels.

The problem considered is to report all root/leaf combinations with their associated products. There may of course be multiple paths between any root and leaf, and in a real world example one would likely want to aggregate. However, in order to keep it simple and focus on the traversal performance, I do not perform any aggregation.

Test Data Structure

Tables

CREATE TABLE orgs ( id              NUMBER NOT NULL, 
                    org_level       NUMBER NOT NULL, 
                    org_name        VARCHAR2(100) NOT NULL,
                    CONSTRAINT      org_pk PRIMARY KEY (id))
/
DROP TABLE org_structure
/
CREATE TABLE org_structure (
                    id              NUMBER NOT NULL, 
                    struct_level    NUMBER NOT NULL, 
                    org_id          NUMBER NOT NULL, 
                    child_org_id    NUMBER NOT NULL,
                    fact            NUMBER,
                    CONSTRAINT      ost_pk PRIMARY KEY (id))
/
CREATE INDEX ost_N1 ON org_structure (org_id)
/
CREATE INDEX ost_N2 ON org_structure (child_org_id)
/

Functional Test Data

To simplify functional validation a 3-level hierarchy was taken, with a relatively small number of records. The functional test data were generated by the same automated approach used for performance testing. The fact number was obtained as a random number between 0 and 1, and, to keep it simple, duplicate pairs were permitted.

The test data were parametrised by width and depth as follows (the exact logic is a little complicated, but can be seen in the code itself; a simplified sketch follows the list):

  • Width corresponds to a percentage increase in the number of child entities relative to the number of parents
  • Depth corresponds to the proportion of the parent entity records a child is (randomly) assigned. Each child has a minimum of 1 parent (lowest depth), and a maximum of all parent entities (highest depth)
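
The exact generator is in the project code; purely as an illustrative sketch of the approach, the insert for a single level transition might look like the following, where org_struct_s is a hypothetical sequence and the minimum-of-one-parent rule is omitted:

INSERT INTO org_structure (id, struct_level, org_id, child_org_id, fact)
SELECT org_struct_s.NEXTVAL,           -- hypothetical sequence for the synthetic id
       p.org_level,                    -- struct_level = level of the parent
       p.id,
       c.id,
       DBMS_Random.Value               -- random fact between 0 and 1
  FROM orgs p
  JOIN orgs c
    ON c.org_level = p.org_level + 1
 WHERE DBMS_Random.Value < 0.5         -- assignment proportion driven by the depth parameter
/
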
Test Data

orgs

        ID  ORG_LEVEL ORG_NAME
---------- ---------- ----------------------------------------------------------------------------------------------------
         1          1 L1 Org 1
         2            L1 Org 2
         3            L1 Org 3
         4          2 L2 Org 1
         5            L2 Org 2
         6            L2 Org 3
         7            L2 Org 4
         8            L2 Org 5
         9            L2 Org 6
        10          3 L3 Org 1
        11            L3 Org 2
        12            L3 Org 3
        13            L3 Org 4
        14            L3 Org 5
        15            L3 Org 6
        16            L3 Org 7
        17            L3 Org 8
        18            L3 Org 9
        19            L3 Org 10
        20            L3 Org 11
        21            L3 Org 12

21 rows selected.

org_structure

        ID STRUCT_LEVEL     ORG_ID CHILD_ORG_ID       FACT
---------- ------------ ---------- ------------ ----------
        25            1          2            4 .765337854
        26                       1            5 .157198428
        27                       2            6 .012739872
        28                       3            7  .75268798
        29                       2            8 .647269295
        30                       2            9 .972586624
         1            2          6           10 .290389829
         2                       7           10 .717844734
         3                       6           11 .909068079
         4                       7           11 .876644977
         5                       9           12  .93576597
         6                       6           12 .097462542
         7                       8           13 .316926046
         8                       8           13 .169842496
         9                       6           14 .765946795
        10                       4           14 .831552357
        11                       8           15 .110940017
        12                       7           15 .295163716
        13                       5           16 .171097557
        14                       5           16 .827432202
        15                       7           17 .339382023
        16                       7           17 .644889466
        17                       7           18 .955594058
        18                       5           18 .668546163
        19                       7           19 .785709973
        20                       6           19 .507321616
        21                       8           20 .511548918
        22                       7           20 .523510327
        23                       6           21 .242612715
        24                       5           21 .561006179

30 rows selected.

Result

ROOT_ORG   LEAF_ORG   FACT_PRODUCT
---------- ---------- ------------
L1 Org 1   L3 Org 12          0.09
L1 Org 1   L3 Org 7           0.03
L1 Org 1   L3 Org 7           0.13
L1 Org 1   L3 Org 9           0.11
L1 Org 2   L3 Org 1           0.00
L1 Org 2   L3 Org 10          0.01
L1 Org 2   L3 Org 11          0.33
L1 Org 2   L3 Org 12          0.00
L1 Org 2   L3 Org 2           0.01
L1 Org 2   L3 Org 3           0.00
L1 Org 2   L3 Org 3           0.91
L1 Org 2   L3 Org 4           0.11
L1 Org 2   L3 Org 4           0.21
L1 Org 2   L3 Org 5           0.01
L1 Org 2   L3 Org 5           0.64
L1 Org 2   L3 Org 6           0.07
L1 Org 3   L3 Org 1           0.54
L1 Org 3   L3 Org 10          0.59
L1 Org 3   L3 Org 11          0.39
L1 Org 3   L3 Org 2           0.66
L1 Org 3   L3 Org 6           0.22
L1 Org 3   L3 Org 8           0.26
L1 Org 3   L3 Org 8           0.49
L1 Org 3   L3 Org 9           0.72

24 rows selected.

All queries returned the expected results above.
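
As an illustration of the fact products, root L1 Org 2 reaches leaf L3 Org 3 by two paths: via L2 Org 3, with product 0.012739872 * 0.097462542 = 0.0012 (reported as 0.00), and via L2 Org 6, with product 0.972586624 * 0.93576597 = 0.9101 (reported as 0.91), giving the two result rows for that root/leaf combination.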

Performance Test Data

The performance test data were created in the same way as the functional test data, but with 5 levels, and with 10 root organizations.

The test data sets used a grid of width and depth values of (100, 120, 140, 160, 180). At the largest data point this resulted in around 3.3 million output records, as can be seen from the A-Rows figures in the execution plans below.

Sequence of joins (JNS_QRY)

SELECT o1.org_name root_org,
       o5.org_name leaf_org,
       s1.fact * s2.fact * s3.fact * s4.fact fact_product
  FROM org_structure s1
  JOIN org_structure s2
    ON s2.org_id = s1.child_org_id
  JOIN org_structure s3
    ON s3.org_id = s2.child_org_id
  JOIN org_structure s4
    ON s4.org_id = s3.child_org_id
  JOIN orgs o1
    ON o1.id = s1.org_id
  JOIN orgs o5
    ON o5.id = s4.child_org_id
 WHERE s1.struct_level = 1
 ORDER BY o1.org_name, o5.org_name, s1.fact * s2.fact * s3.fact * s4.fact

Plan hash value: 914261573

----------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name          | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|
----------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |               |      1 |        |   3330K|00:00:11.12 |     718 |  51157 |  51157 |       |       |          |         |
|   1 |  SORT ORDER BY          |               |      1 |   3905K|   3330K|00:00:11.12 |     718 |  51157 |  51157 |   454M|  7031K|  163M (1)|     400K|
|*  2 |   HASH JOIN             |               |      1 |   3905K|   3330K|00:00:01.76 |     714 |      0 |      0 |  1483K|  1483K| 1524K (0)|         |
|   3 |    TABLE ACCESS FULL    | ORGS          |      1 |    944 |    944 |00:00:00.01 |       7 |      0 |      0 |       |       |          |         |
|*  4 |    HASH JOIN            |               |      1 |   3905K|   3330K|00:00:00.45 |     707 |      0 |      0 |  2733K|  1562K| 4103K (0)|         |
|   5 |     TABLE ACCESS FULL   | ORG_STRUCTURE |      1 |  27288 |  27288 |00:00:00.01 |     175 |      0 |      0 |       |       |          |         |
|*  6 |     HASH JOIN           |               |      1 |    133K|  30520 |00:00:00.02 |     532 |      0 |      0 |   917K|   917K| 3834K (0)|         |
|*  7 |      HASH JOIN          |               |      1 |   4575 |    780 |00:00:00.01 |     357 |      0 |      0 |  1062K|  1062K| 1260K (0)|         |
|*  8 |       HASH JOIN         |               |      1 |     56 |     56 |00:00:00.01 |     182 |      0 |      0 |  1185K|  1185K| 1184K (0)|         |
|*  9 |        TABLE ACCESS FULL| ORG_STRUCTURE |      1 |     56 |     56 |00:00:00.01 |     175 |      0 |      0 |       |       |          |         |
|  10 |        TABLE ACCESS FULL| ORGS          |      1 |    944 |    944 |00:00:00.01 |       7 |      0 |      0 |       |       |          |         |
|  11 |       TABLE ACCESS FULL | ORG_STRUCTURE |      1 |  27288 |  27288 |00:00:00.01 |     175 |      0 |      0 |       |       |          |         |
|  12 |      TABLE ACCESS FULL  | ORG_STRUCTURE |      1 |  27288 |  27288 |00:00:00.01 |     175 |      0 |      0 |       |       |          |         |
----------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("O5"."ID"="S4"."CHILD_ORG_ID")
   4 - access("S4"."ORG_ID"="S3"."CHILD_ORG_ID")
   6 - access("S3"."ORG_ID"="S2"."CHILD_ORG_ID")
   7 - access("S2"."ORG_ID"="S1"."CHILD_ORG_ID")
   8 - access("O1"."ID"="S1"."ORG_ID")
   9 - filter("S1"."STRUCT_LEVEL"=1)

Note
-----
   - this is an adaptive plan

Notes on JNS_QRY

It is interesting to note that all joins in the execution plan are hash joins, and in the sequence you would expect. The first three are in the default join ‘sub-order’ that defines whether the joined table or the prior rowset (the default) is used to form the hash table, while the last two are in the reverse order, corresponding to the swap_join_inputs hint. I wrote a short note on that subject, A Note on Oracle Join Orders and Hints, last year, and have now written an article using the largest data point in the current problem to explore performance variation across the possible sub-orders.
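
For illustration (this is not one of the benchmarked queries), the following hints direct the optimizer to build the hash table from orgs rather than from the prior rowset, which is the effect swap_join_inputs has:

SELECT /*+ use_hash(o) swap_join_inputs(o) */
       o.org_name, s.fact
  FROM org_structure s
  JOIN orgs o
    ON o.id = s.org_id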

Recursive pipelined function (PLF_QRY)

CREATE OR REPLACE TYPE org_struct_rec_type IS OBJECT (struct_level NUMBER, org_id NUMBER, fact_product NUMBER);
/
CREATE TYPE org_struct_lis_type IS VARRAY(32767) OF org_struct_rec_type;
/
CREATE OR REPLACE FUNCTION Org_Products (p_org_id PLS_INTEGER, p_fact_product NUMBER) RETURN org_struct_lis_type PIPELINED IS
  l_org_struct_lis  org_struct_lis_type;
BEGIN

  FOR rec_org_struct IN (
      SELECT child_org_id,
             p_fact_product * fact fact_product,
             struct_level
      FROM org_structure
      WHERE org_id = p_org_id) LOOP

    PIPE ROW (org_struct_rec_type (rec_org_struct.struct_level, rec_org_struct.child_org_id, rec_org_struct.fact_product));

    FOR rec_org_struct_child IN (SELECT struct_level, org_id, fact_product FROM TABLE (Org_Products (rec_org_struct.child_org_id, rec_org_struct.fact_product))) LOOP

      PIPE ROW (org_struct_rec_type (rec_org_struct_child.struct_level, rec_org_struct_child.org_id, rec_org_struct_child.fact_product));

    END LOOP;

  END LOOP;

END Org_Products;
/

SELECT o1.org_name root_org,
       o5.org_name leaf_org,
       t.fact_product fact_product
  FROM org_structure s
  CROSS APPLY TABLE (Org_Products (s.child_org_id, s.fact)) t
  JOIN orgs o1
    ON o1.id = s.org_id
  JOIN orgs o5
    ON o5.id = t.org_id
 WHERE s.struct_level = 1
   AND t.struct_level = 4
 ORDER BY o1.org_name, o5.org_name, t.fact_product

Plan hash value: 1216100769

-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name          | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |               |      1 |        |   3330K|00:03:38.10 |    9072K|  51163 |  51163 |       |       |          |         |
|   1 |  SORT ORDER BY                       |               |      1 |   4574 |   3330K|00:03:38.10 |    9072K|  51163 |  51163 |   455M|  7037K|  163M (1)|     400K|
|*  2 |   HASH JOIN                          |               |      1 |   4574 |   3330K|00:03:15.00 |    9072K|      0 |      0 |  1483K|  1483K| 1489K (0)|         |
|   3 |    TABLE ACCESS FULL                 | ORGS          |      1 |    944 |    944 |00:00:00.01 |       7 |      0 |      0 |       |       |          |         |
|   4 |    NESTED LOOPS                      |               |      1 |   4574 |   3330K|00:03:13.28 |    9072K|      0 |      0 |       |       |          |         |
|*  5 |     HASH JOIN                        |               |      1 |     56 |     56 |00:00:00.01 |     182 |      0 |      0 |  1160K|  1160K| 1199K (0)|         |
|*  6 |      TABLE ACCESS FULL               | ORG_STRUCTURE |      1 |     56 |     56 |00:00:00.01 |     175 |      0 |      0 |       |       |          |         |
|   7 |      TABLE ACCESS FULL               | ORGS          |      1 |    944 |    944 |00:00:00.01 |       7 |      0 |      0 |       |       |          |         |
|*  8 |     COLLECTION ITERATOR PICKLER FETCH| ORG_PRODUCTS  |     56 |     82 |   3330K|00:03:12.84 |    9072K|      0 |      0 |       |       |          |         |
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("O5"."ID"=VALUE(KOKBF$))
   5 - access("O1"."ID"="S"."ORG_ID")
   6 - filter("S"."STRUCT_LEVEL"=1)
   8 - filter(VALUE(KOKBF$)=4)

Outer Loop Function Query - Explain Plan Only

SELECT child_org_id,
       fact fact_product,
       struct_level
  FROM org_structure
  WHERE org_id = 1

Query Plan
---------------------------------------------------
SELECT STATEMENT   Cost = 41
  TABLE ACCESS BY INDEX ROWID BATCHED ORG_STRUCTURE
    INDEX RANGE SCAN OST_N1

Notes on PLF_QRY

For simplicity a stand-alone database function was used here. The query execution plan was obtained by the benchmarking framework, with the plan for the highest data point listed above. The query within the function was extracted and an Explain Plan performed manually, which showed the expected index range scan.
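
One standard way of producing such a plan manually (assuming the default plan table is available) is:

EXPLAIN PLAN FOR
SELECT child_org_id,
       fact fact_product,
       struct_level
  FROM org_structure
 WHERE org_id = 1
/
SELECT * FROM TABLE (DBMS_XPlan.Display)
/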

Recursive subquery factors (RSF_QRY)

WITH rsf (root_org_id, child_org_id, fact_product, lev) AS
(
SELECT org_id, child_org_id, fact, 1
  FROM org_structure
 WHERE struct_level = 1
UNION ALL
SELECT r.root_org_id,
       s.child_org_id,
       r.fact_product * s.fact,
       r.lev + 1
  FROM rsf r
  JOIN org_structure s
    ON s.org_id = r.child_org_id
)
SELECT o1.org_name root_org,
       o5.org_name leaf_org,
       r.fact_product fact_product
  FROM rsf r
  JOIN orgs o1
    ON o1.id = r.root_org_id
  JOIN orgs o5
    ON o5.id = r.child_org_id
 WHERE r.lev = 4
 ORDER BY o1.org_name, o5.org_name, r.fact_product

Plan hash value: 248371385

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                    | Name          | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                             |               |      1 |        |   3330K|00:00:55.92 |      39M|  73843 |  93808 |       |       |          |         |
|   1 |  SORT ORDER BY                               |               |      1 |   4631 |   3330K|00:00:55.92 |      39M|  73843 |  93808 |   454M|  7030K|  162M (1)|     400K|
|*  2 |   HASH JOIN                                  |               |      1 |   4631 |   3330K|00:00:45.06 |      39M|  22678 |  42643 |  1519K|  1519K| 1555K (0)|         |
|   3 |    TABLE ACCESS FULL                         | ORGS          |      1 |    944 |    944 |00:00:00.01 |       7 |      0 |      0 |       |       |          |         |
|*  4 |    HASH JOIN                                 |               |      1 |   4631 |   3330K|00:00:43.14 |      39M|  22678 |  42643 |  1519K|  1519K| 1546K (0)|         |
|   5 |     TABLE ACCESS FULL                        | ORGS          |      1 |    944 |    944 |00:00:00.01 |       7 |      0 |      0 |       |       |          |         |
|*  6 |     VIEW                                     |               |      1 |   4631 |   3330K|00:00:41.58 |      39M|  22678 |  42643 |       |       |          |         |
|   7 |      UNION ALL (RECURSIVE WITH) BREADTH FIRST|               |      1 |        |   3361K|00:00:40.72 |      39M|  22678 |  42643 |   173M|  4426K|   97M (0)|         |
|*  8 |       TABLE ACCESS FULL                      | ORG_STRUCTURE |      1 |     56 |     56 |00:00:00.01 |     175 |      0 |      0 |       |       |          |         |
|*  9 |       HASH JOIN                              |               |      4 |   4575 |   3361K|00:00:06.43 |     701 |  22678 |  22935 |   282M|    10M|   56M (1)|     191K|
|  10 |        RECURSIVE WITH PUMP                   |               |      4 |        |   3361K|00:00:00.56 |       1 |  19708 |      0 |       |       |          |         |
|  11 |        TABLE ACCESS FULL                     | ORG_STRUCTURE |      4 |  27288 |    109K|00:00:00.02 |     700 |      0 |      0 |       |       |          |         |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("O5"."ID"="R"."CHILD_ORG_ID")
   4 - access("O1"."ID"="R"."ROOT_ORG_ID")
   6 - filter("R"."LEV"=4)
   8 - filter("STRUCT_LEVEL"=1)
   9 - access("S"."ORG_ID"="R"."CHILD_ORG_ID")

Note
-----
   - this is an adaptive plan

Notes on RSF_QRY

This uses recursive subquery factoring (the recursive WITH clause), a feature introduced in Oracle v11.2.

Performance Testing Results

Deep Slice Elapsed Times [d=180, w=(100, 120, 140, 160, 180)]

  • JNS_QRY is faster than RSF_QRY, which is faster than PLF_QRY at all data points
  • PLF_QRY tracks the number of output records very closely. This is likely because the function executes a query at every node in the hierarchy that uses an indexed search.
  • The pure SQL methods scale better through being able to do full table scans, and avoiding multiple query executions

Deep Slice Elapsed – CPU Times

The elapsed times minus the CPU times are shown in the first graph below, followed by the disk writes. The disk writes (and reads) are computed as the maximum values across the execution plan at the given data point, and are obtained from the system view v$sql_plan_statistics_all. The benchmarking framework gathers these and other statistics automatically.
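The gathering amounts to queries of roughly the following form (a simplified sketch; :sql_id and :child_no are hypothetical binds identifying the benchmarked cursor):

SELECT MAX (last_disk_reads)  disk_reads,
       MAX (last_disk_writes) disk_writes
  FROM v$sql_plan_statistics_all
 WHERE sql_id = :sql_id
   AND child_number = :child_no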

  • The graphs show how the elapsed time minus CPU times track the disk accesses reasonably well
  • RSF_QRY does nearly twice as many disk writes as the other two

Wide Slice Results [w=180, d=(100, 120, 140, 160, 180)]

The performance characteristics of the three methods across the wide slice data points are pretty similar to those across the deep slice. The graphs are shown below.

Conclusions

  • For the example problem taken, the most efficient way to traverse fixed-level hierarchies is by a sequence of joins
  • Recursive methods are significantly worse, and the recursive function is especially inefficient because it performs large numbers of query executions using indexed searches, instead of full scans
  • The execution plan for the join sequence query gives an example of a sequence of hash joins with different choices of ‘join inputs’. It may be interesting to explore the different performance characteristics of the possible choices using hints in a subsequent article (Benchmarking of Hash Join Options in SQL for Fixed-Depth Hierarchies)
  • The output log is attached, and all code is on my GitHub project, GitHub: dim_bench_sql_oracle

Batch_Org






Dimensional Benchmarking of Bracket Parsing SQL

I noticed an interesting thread on OTN recently, Matching ( in a string. It’s about using SQL to find matching bracket pairs (technically ‘parentheses’ but ‘brackets’ is shorter, and it makes no difference to the SQL). Incidentally, I found recently that nested bracket expressions are a nice way of compactly representing the structure of complex hash join execution plans, A Note on Oracle Join Orders and Hints.

I thought it would be interesting to run some of the solution queries through my benchmarking package to test performance (A Framework for Dimensional Benchmarking of SQL Performance). I decided to consider only the queries that addressed multiple records, and the form of the problem that requires returning record uid, opening and closing bracket positions, plus the substring enclosed. These were by ‘mathguy’ and ‘Peter vd Zwan’, and I made very minor tweaks for consistency. I also wrote a query myself using PL/SQL in an inline SQL function using the new (v12.1) ‘WITH Function’ functionality, and copied this to a version using a pipelined database function to check for any performance differences. The four queries tested were then:

  • CBL_QRY, mathguy: Connect By, Analytics, Regex
  • MRB_QRY, Peter vd Zwan: Connect By, Match_Recognize
  • WFB_QRY, me: With PL/SQL Function, Arrays
  • PFB_QRY, me: Pipelined PL/SQL Function, Arrays

Bracket Pair Definition

Consider a function (BrDiff) defined at each character position as the difference between the number of opening and closing brackets to the left of, or at, that position.

A closing bracket closes an opening bracket if it is the first closing bracket where BrDiff is 1 less than BrDiff at the opening bracket. If all brackets are in some pair, then the expression can be considered well-formed.

This can be illustrated with a diagram for the fourth functional example below.
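
For the string in the fourth example, b0(b1(b2(b3(x))(xy))), BrDiff evaluates as follows:

Pos:      1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21
Char:     b  0  (  b  1  (  b  2  (  b  3  (  x  )  )  (  x  y  )  )  )
BrDiff:   0  0  1  1  1  2  2  2  3  3  3  4  4  3  2  3  3  3  2  1  0

For example, the opening bracket at position 12 has BrDiff 4, and the first subsequent closing bracket with BrDiff 3 is at position 14, giving the pair (12, 14) seen in the results below.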

Test Problem

BRACKET_STRINGS Table

CREATE TABLE bracket_strings (id NUMBER, str VARCHAR2(4000))
/

Functional Test Data

I took four of mathguy’s test records, excluding the (deliberately) badly-formed strings; some of the records included embedded returns:

Test Data

     ID STR
------- ----------------------------------------
      1  ((Hello ( Hi Hi hi ( A B C ( D)) (EF)
        why Whwy whyhhh )
        )
        )

      2 (1+3*(3-1) + 3*(2+1))
      3 ()()*(())a()(())
      4 b0(b1(b2(b3(x))(xy)))

Result

     ID      O_POS      C_POS STR
------- ---------- ---------- ----------------------------------------
      1          2         60 ((Hello ( Hi Hi hi ( A B C ( D)) (EF)
                              why Whwy whyhhh )
                              )
                              )

      1          3         58 (Hello ( Hi Hi hi ( A B C ( D)) (EF)
                              why Whwy whyhhh )
                              )

      1         10         56 ( Hi Hi hi ( A B C ( D)) (EF)
                              why Whwy whyhhh )

      1         21         33 ( A B C ( D))
      1         29         32 ( D)
      1         35         38 (EF)
      2          1         21 (1+3*(3-1) + 3*(2+1))
      2          6         10 (3-1)
      2         16         20 (2+1)
      3          1          2 ()
      3          3          4 ()
      3          6          9 (())
      3          7          8 ()
      3         11         12 ()
      3         13         16 (())
      3         14         15 ()
      4          3         21 (b1(b2(b3(x))(xy)))
      4          6         20 (b2(b3(x))(xy))
      4          9         15 (b3(x))
      4         12         14 (x)
      4         16         19 (xy)

21 rows selected.

All queries returned the expected results above.

Performance Test Data

Each test set consisted of 100 records with the str column containing the brackets expression dependent on width (w) and depth (d) parameters, as follows:

  • Each str column contains w bracket pairs
  • The str column begins with a 3-character record number
  • After the record number, the str column begins with d opening brackets, each followed by 3 characters of text, like ‘(001’, etc., followed by the d closing brackets, then the remaining w - d pairs in an unnested sequence, like ‘(001)’

When w=d the pairs are fully nested, and when d=0 there is no nesting, just a sequence of ‘(123)’ strings.
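
The exact construction is in the project code; as an illustrative sketch, the following anonymous block builds one such string for w = 5 and d = 2, with the record number hardcoded as '001':

DECLARE
  c_w   CONSTANT PLS_INTEGER := 5;
  c_d   CONSTANT PLS_INTEGER := 2;
  l_str VARCHAR2(4000) := '001';                  -- 3-character record number
BEGIN
  FOR i IN 1..c_d LOOP                            -- d nested opening brackets, each with 3 characters of text
    l_str := l_str || '(' || To_Char (i, 'FM000');
  END LOOP;
  l_str := l_str || RPad (')', c_d, ')');         -- the d closing brackets
  FOR i IN c_d + 1..c_w LOOP                      -- the remaining w - d unnested pairs
    l_str := l_str || '(' || To_Char (i, 'FM000') || ')';
  END LOOP;
  DBMS_Output.Put_Line (l_str);                   -- prints: 001(001(002))(003)(004)(005)
END;
/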

This choice of test data sets allows us to see whether the number of brackets and the depth of nesting each have an effect on performance. The two slices used were:

  • Depth fixed, at small width point; width varies: d=100, w=(100, 200, 300, 400)
  • Width fixed at high depth point; depth varies: w=400, d=(0, 100, 200, 300, 400)

The output from the test queries therefore consists of 100*w records with a record identifier and a bracketed string. For performance testing purposes the benchmarking framework writes the results to a file in csv format, while counting only the query steps in the query timing results.

All the queries showed strong time correlation with width, with smaller correlation with depth.

Connect By, Analytics, Regex Query (CBL_QRY)

WITH    d ( id, str, pos ) as (
      select id, str, regexp_instr(str, '\(|\)', 1, level)
      from   bracket_strings
      connect by level <= length(str) - length(translate(str, 'x()', 'x'))
             and prior id = id
             and prior sys_guid() is not null
    ),
    p ( id, str, pos, flag, l_ct, ct ) as (
      select id, str, pos, case substr(str, pos, 1) when '(' then 1 else -1 end,
             sum(case substr(str, pos, 1) when '(' then 1         end) over (partition by id order by pos),
             sum(case substr(str, pos, 1) when '(' then 1 else -1 end) over (partition by id order by pos)
      from   d
    ),
    f ( id, str, pos, flag, l_ct, ct, o_ct ) as (
      select id, str, pos, flag, l_ct, ct + case flag when 1 then 0 else 1 end as ct,
             row_number() over (partition by id, flag, ct order by pos)
      from   p
    )
select   /*+ CBL_QRY gather_plan_statistics */ id,
        min(case when flag =  1 then pos end) as o_pos,
        min(case when flag = -1 then pos end) as c_pos,
        Substr (str, min(case when flag =  1 then pos end), min(case when flag = -1 then pos end) - min(case when flag =  1 then pos end) + 1) str
from    f
group by id, str, ct, o_ct
order by 1, 2

Plan hash value: 2736674058

------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|
------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                 |      1 |        |  40000 |00:00:16.30 |      43 |  40000 |  40000 |       |       |          |         |
|   1 |  SORT ORDER BY                      |                 |      1 |    100 |  40000 |00:00:16.30 |      43 |  40000 |  40000 |    21M|  1702K|   19M (0)|         |
|   2 |   HASH GROUP BY                     |                 |      1 |    100 |  40000 |00:00:16.24 |      43 |  40000 |  40000 |    85M|  7293K|   85M (0)|         |
|   3 |    VIEW                             |                 |      1 |    100 |  80000 |00:01:19.35 |      43 |  40000 |  40000 |       |       |          |         |
|   4 |     WINDOW SORT                     |                 |      1 |    100 |  80000 |00:01:19.27 |      43 |  40000 |  40000 |   175M|  4458K|   97M (1)|     157K|
|   5 |      VIEW                           |                 |      1 |    100 |  80000 |00:01:05.90 |      40 |  20000 |  20000 |       |       |          |         |
|   6 |       WINDOW SORT                   |                 |      1 |    100 |  80000 |00:01:05.86 |      40 |  20000 |  20000 |   175M|  4457K|   97M (0)|     157K|
|   7 |        VIEW                         |                 |      1 |    100 |  80000 |00:00:09.77 |      38 |      0 |      0 |       |       |          |         |
|*  8 |         CONNECT BY WITHOUT FILTERING|                 |      1 |        |  80000 |00:00:02.26 |      38 |      0 |      0 |   267K|   267K|  237K (0)|         |
|   9 |          TABLE ACCESS FULL          | BRACKET_STRINGS |      1 |    100 |    100 |00:00:00.01 |      38 |      0 |      0 |       |       |          |         |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   8 - access("ID"=PRIOR NULL)

Notes on CBL_QRY

Subquery d uses Regexp_Instr with connect by to generate a row, with its position, for each bracket in each string. The prior id = id and prior sys_guid() is not null conditions are the standard way of generating rows per source record while avoiding connect by cycles; subqueries p and f then compute the running bracket difference (BrDiff) and a row number that allows the opening and closing brackets of each pair to be matched in the final group by.
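
The row-generation technique can be seen in isolation on a single string:

SELECT Regexp_Instr ('a(b)c', '\(|\)', 1, LEVEL) pos
  FROM DUAL
CONNECT BY LEVEL <= Length ('a(b)c') - Length (Translate ('a(b)c', 'x()', 'x'))

This returns the bracket positions 2 and 4.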

Connect By, Match_Recognize Query (MRB_QRY)

WITH b as
(
select
  substr(str,level,1) s
  ,level n
  ,id
  ,str
from
  bracket_strings
connect by
id =  prior id
and substr(str,level,1) is not null
and prior sys_guid() is not null
)
select  /*+ MRB_QRY gather_plan_statistics */ 
  id
  ,o_pos
  ,c_pos
  ,substr(str,o_pos,c_pos - o_pos + 1) str
from
b
MATCH_RECOGNIZE (
partition by id
ORDER BY n
MEASURES 
  str as str
  ,FIRST( N) AS o_pos
  ,LAST( N) AS c_pos
one ROW PER MATCH
AFTER MATCH SKIP to next row
PATTERN (ob (ob | nb | cb)*? lcb)
DEFINE
  ob as ob.s = '('
  ,cb as cb.s = ')'
  ,nb as nb.s not in ('(',')')
  ,lcb as lcb.s = ')' and (count(ob.s) = count(cb.s) + 1)
) MR

Plan hash value: 2214599770

-----------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                 |                 |      1 |        |  40000 |00:03:38.26 |      40 |   4473K|  50075 |       |       |          |
|   1 |  SORT ORDER BY                   |                 |      1 |    100 |  40000 |00:03:38.26 |      40 |   4473K|  50075 |    21M|  1702K|   19M (0)|
|   2 |   VIEW                           |                 |      1 |    100 |  40000 |00:04:48.17 |      40 |   4473K|  50075 |       |       |          |
|   3 |    MATCH RECOGNIZE SORT          |                 |      1 |    100 |  40000 |00:04:48.16 |      40 |   4473K|  50075 |   440M|  6922K|  163M (0)|
|   4 |     VIEW                         |                 |      1 |    100 |    200K|00:00:04.65 |      38 |      0 |      0 |       |       |          |
|*  5 |      CONNECT BY WITHOUT FILTERING|                 |      1 |        |    200K|00:00:04.54 |      38 |      0 |      0 |   267K|   267K|  237K (0)|
|   6 |       TABLE ACCESS FULL          | BRACKET_STRINGS |      1 |    100 |    100 |00:00:00.01 |      38 |      0 |      0 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   5 - access("ID"=PRIOR NULL)

Notes on MRB_QRY

This uses the v12.1 feature Match_Recognize, operating on the characters in the string after conversion to rows. The lazy pattern ob (ob | nb | cb)*? lcb matches from each opening bracket to the first closing bracket at which the bracket counts balance, per the lcb condition.

With PL/SQL Function, Arrays Query (WFB_QRY)

WITH FUNCTION Parse_Brackets (p_str VARCHAR2) RETURN bra_lis_type IS /* WFB_QRY */ 
  c_n_ob       CONSTANT PLS_INTEGER := Length (p_str) - Length (Replace (p_str, '(', ''));
  l_ob_lis              SYS.ODCINumberList := SYS.ODCINumberList();
  l_cb_lis              SYS.ODCINumberList := SYS.ODCINumberList();
  TYPE b_rec_type   IS  RECORD (pos INTEGER, diff INTEGER);
  TYPE b_lis_type   IS  VARRAY(32767) OF b_rec_type;
  l_b_lis               b_lis_type := b_lis_type(NULL);
  l_bra_lis             bra_lis_type := bra_lis_type();
  n_b                   PLS_INTEGER := 0;
  n_ob                  PLS_INTEGER := 0;
  n_cb                  PLS_INTEGER := 0;
  l_chr                 VARCHAR2(1);
  l_o_diff              PLS_INTEGER;
BEGIN

  IF c_n_ob = 0 THEN
    RETURN NULL;
  END IF;
  l_ob_lis.EXTEND (c_n_ob);
  l_bra_lis.EXTEND (c_n_ob);
  l_cb_lis.EXTEND (c_n_ob);
  l_b_lis.EXTEND (c_n_ob + c_n_ob);
 
  FOR i IN 1..Length (p_str) LOOP
 
    l_chr := Substr (p_str, i, 1);
    IF l_chr NOT IN ('(', ')') THEN CONTINUE; END IF;

    n_b := n_b + 1;
    l_b_lis(n_b).pos := i;
 
    IF l_chr = '(' THEN
      n_ob := n_ob + 1;
      l_ob_lis(n_ob) := n_b;
    ELSE
      n_cb := n_cb + 1;
      l_cb_lis(n_cb) := n_b;
    END IF;

    l_b_lis(n_b).diff := n_ob - n_cb;
 
  END LOOP;

  FOR i IN 1..n_ob LOOP

    l_o_diff := l_b_lis (l_ob_lis(i)).diff;
    FOR j IN 1..n_cb LOOP

      IF l_b_lis (l_cb_lis(j)).pos < l_b_lis (l_ob_lis(i)).pos THEN CONTINUE; END IF;
      IF l_o_diff = l_b_lis (l_cb_lis(j)).diff + 1 THEN

        l_bra_lis(i) := bra_rec_type (l_b_lis(l_ob_lis(i)).pos, l_b_lis(l_cb_lis(j)).pos, Substr (p_str, l_b_lis(l_ob_lis(i)).pos, l_b_lis(l_cb_lis(j)).pos - l_b_lis(l_ob_lis(i)).pos + 1));
        EXIT;

      END IF;

    END LOOP;
 
  END LOOP;
  RETURN l_bra_lis;

END;
SELECT
    b.id, t.o_pos, t.c_pos, t.str
  FROM bracket_strings b
  OUTER APPLY TABLE (Parse_Brackets (b.str)) t
 ORDER BY 1, 2

Notes on WFB_QRY

This uses the v12.1 feature whereby a PL/SQL function can be included directly in a query.
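
As a minimal self-contained example of the feature, unrelated to the test problem:

WITH FUNCTION Twice (p_n NUMBER) RETURN NUMBER IS
BEGIN
  RETURN 2 * p_n;
END;
SELECT Twice (21) doubled FROM DUAL
/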

Pipelined PL/SQL Function, Arrays Query (PFB_QRY)

PFB_QRY – Types and Pipelined Function

CREATE OR REPLACE TYPE bra_rec_type IS OBJECT (o_pos INTEGER, c_pos INTEGER, str VARCHAR2(4000));
/
CREATE TYPE bra_lis_type IS VARRAY(4000) OF bra_rec_type;
/
FUNCTION Parse_Brackets (p_str VARCHAR2) RETURN bra_lis_type PIPELINED IS
  c_n_ob       CONSTANT PLS_INTEGER := Length (p_str) - Length (Replace (p_str, '(', ''));
  l_ob_lis              SYS.ODCINumberList := SYS.ODCINumberList();
  l_cb_lis              SYS.ODCINumberList := SYS.ODCINumberList();
  TYPE b_rec_type   IS  RECORD (pos INTEGER, diff INTEGER);
  TYPE b_lis_type   IS  VARRAY(32767) OF b_rec_type;
  l_b_lis               b_lis_type := b_lis_type(NULL);
  l_bra_lis             bra_lis_type := bra_lis_type();
  n_b                   PLS_INTEGER := 0;
  n_ob                  PLS_INTEGER := 0;
  n_cb                  PLS_INTEGER := 0;
  l_chr                 VARCHAR2(1);
  l_o_diff              PLS_INTEGER;
BEGIN

  IF c_n_ob = 0 THEN
    RETURN;
  END IF;
  l_ob_lis.EXTEND (c_n_ob);
  l_bra_lis.EXTEND (c_n_ob);
  l_cb_lis.EXTEND (c_n_ob);
  l_b_lis.EXTEND (c_n_ob + c_n_ob);
 
  FOR i IN 1..Length (p_str) LOOP
 
    l_chr := Substr (p_str, i, 1);
    IF l_chr NOT IN ('(', ')') THEN CONTINUE; END IF;

    n_b := n_b + 1;
    l_b_lis(n_b).pos := i;
 
    IF l_chr = '(' THEN
      n_ob := n_ob + 1;
      l_ob_lis(n_ob) := n_b;
    ELSE
      n_cb := n_cb + 1;
      l_cb_lis(n_cb) := n_b;
    END IF;

    l_b_lis(n_b).diff := n_ob - n_cb;
 
  END LOOP;

  FOR i IN 1..n_ob LOOP

    l_o_diff := l_b_lis (l_ob_lis(i)).diff;
    FOR j IN 1..n_cb LOOP

      IF l_b_lis (l_cb_lis(j)).pos < l_b_lis (l_ob_lis(i)).pos THEN CONTINUE; END IF;
      IF l_o_diff = l_b_lis (l_cb_lis(j)).diff + 1 THEN

        PIPE ROW (bra_rec_type (l_b_lis(l_ob_lis(i)).pos, l_b_lis(l_cb_lis(j)).pos, Substr (p_str, l_b_lis(l_ob_lis(i)).pos, l_b_lis(l_cb_lis(j)).pos - l_b_lis(l_ob_lis(i)).pos + 1)));
        EXIT;

      END IF;

    END LOOP;
 
  END LOOP;

END Parse_Brackets;

Plan hash value: 3367347570

---------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |                 |      1 |        |  40000 |00:00:01.17 |      38 |       |       |          |
|   1 |  SORT ORDER BY                       |                 |      1 |    816K|  40000 |00:00:01.17 |      38 |    21M|  1702K|   19M (0)|
|   2 |   NESTED LOOPS OUTER                 |                 |      1 |    816K|  40000 |00:00:16.80 |      38 |       |       |          |
|   3 |    TABLE ACCESS FULL                 | BRACKET_STRINGS |      1 |    100 |    100 |00:00:00.01 |      38 |       |       |          |
|   4 |    VIEW                              | VW_LAT_D4FD8C38 |    100 |   8168 |  40000 |00:00:01.13 |       0 |       |       |          |
|   5 |     COLLECTION ITERATOR PICKLER FETCH| PARSE_BRACKETS  |    100 |   8168 |  40000 |00:00:01.10 |       0 |       |       |          |
---------------------------------------------------------------------------------------------------------------------------------------------

PFB_QRY – Query

SELECT  /*+ PFB_QRY gather_plan_statistics */
        b.id        id, 
        t.o_pos     o_pos, 
        t.c_pos     c_pos,
        t.str       str
  FROM bracket_strings b
  OUTER APPLY TABLE (Strings.Parse_Brackets (b.str)) t
 ORDER BY b.id, t.o_pos

Plan hash value: 3367347570

---------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |                 |      1 |        |  40000 |00:00:01.15 |      38 |       |       |          |
|   1 |  SORT ORDER BY                       |                 |      1 |    816K|  40000 |00:00:01.15 |      38 |    21M|  1702K|   19M (0)|
|   2 |   NESTED LOOPS OUTER                 |                 |      1 |    816K|  40000 |00:00:01.95 |      38 |       |       |          |
|   3 |    TABLE ACCESS FULL                 | BRACKET_STRINGS |      1 |    100 |    100 |00:00:00.01 |      38 |       |       |          |
|   4 |    VIEW                              | VW_LAT_D4FD8C38 |    100 |   8168 |  40000 |00:00:01.51 |       0 |       |       |          |
|   5 |     COLLECTION ITERATOR PICKLER FETCH| PARSE_BRACKETS  |    100 |   8168 |  40000 |00:00:01.50 |       0 |       |       |          |
---------------------------------------------------------------------------------------------------------------------------------------------

Notes on PFB_QRY

This uses a PL/SQL pipelined database function, which is called in the query.

Performance Testing Results

Key

If CPU time (y) is proportional to the varying dimension (x), we would have:

y = k.x

and so, for any two data points 1 and 2:

y2.x1/(y1.x2) = 1

We can take the actual value of this linear ratio as a marker for linear proportionality: for example, if the time doubles when the dimension doubles, the ratio is 2.1/(1.2) = 1.

If CPU time (y) is proportional to the square of the varying dimension (x), we would have:

y = k.x.x

and so, for any two data points 1 and 2:

y2.x1.x1/(y1.x2.x2) = 1

We can take the actual value of this quadratic ratio as a marker for quadratic proportionality.

  • LRTB = Ratio to best time at high data point
  • RTP_L = Linear ratio as defined above, averaged over successive data points
  • RTP_Q = Quadratic ratio as defined above, averaged over successive data points

Depth fixed, at small width point; width varies: d=100, w=(100, 200, 300, 400)

Notes on Results for Fixed Depth

  • CPU time increases with width for all queries at above a linear rate
  • CBL_QRY is significantly faster than MRB_QRY, which appears to be rising quadratically
  • Both WFB_QRY and PFB_QRY are much faster than the non-PL/SQL queries
  • PFB_QRY is slightly faster than WFB_QRY. This could be regarded as too small a difference to be significant, but is consistent across the data points, despite the context switching with database function calls

Width fixed at high depth point; depth varies: w=400, d=(0, 100, 200, 300, 400)

Notes on Results for Fixed Width

  • CPU time increases with depth for all queries although at a sublinear rate
  • CBL_QRY has a big jump in CPU time between d=0 and d=100, where nesting starts to come in, and MRB_QRY was actually faster than CBL_QRY without nesting






Oracle and JUnit Data Driven Testing: An Example

In Design Patterns for Database API Testing 1: Web Service Saving 2 – Code I identified a number of antipatterns commonly seen in database testing.

utPLSQL, originally developed by Steve Feuerstein around 15 years ago, may have been the first Oracle unit testing framework, and it seems to be the most widely used. Its new GitHub page describes it thus:

The official new home of utPLSQL, unit testing framework for PL/SQL, based on xUnit.

xUnit refers to the family of unit testing frameworks derived from Java’s JUnit, and present in most object-oriented languages, such as Ruby, Python etc.

It has occurred to me that some of the problems with unit testing in the database world may arise from translating object-oriented testing ideas too zealously into a world where they may not work so well. It could also impinge on the design of base code: in the object-oriented world any complexity in unit tests is usually seen as a ‘code smell’ indicating that the base units are too complex, and testability is seen as a key objective in OO module design. To gain some insight into the differences between database and object-oriented testing, it seemed like a good idea to try to test the same functionality in both Java and Oracle. This article gives the results of this experiment.

I have submitted an abstract for a presentation entitled “Database Functional Testing: Automation, Yes; xUnit-Style Micro-Testing, No” to one of the Oracle User Group conferences for Q1 2017, but it has not yet been approved or rejected.

Update, 10 May 2017: My abstract for the Ireland OUG in March 2017 was not accepted. I did present on a second abstract that was accepted: Dimensional Performance Benchmarking of SQL – IOUG Presentation

Testing Example

In Java unit testing one normally tries to test code in isolation, without database access or other dependencies, whereas in Oracle it is normally database operations that are being tested. As a compromise I chose to implement code that would read a single key column from a CSV file and store counts of the key values in memory, with methods to return these key-value pairs as lists either unordered, ordered by key, or ordered by value (then key).

The constructor method will take three parameters:

  • File name
  • Delimiter
  • Key column position

We’ll use the same file name, which in Java will include the full path, whereas Oracle will assume the path from an Oracle directory object. There will be two scenarios that will test different values for the other two parameters simultaneously (as what I call sub-scenarios in Design Patterns for Database API Testing 1: Web Service Saving 1 – Design).

For the unordered method, we will validate only the scalar count of the records returned, while for the ordered methods we will validate the full ordered lists of tuples returned. For illustrative purposes one test method for each scenario will have a deliberate error in the expected values.

Test Scenario 1 – Tie-break/single-delimiter/interior column

Here the test file has four columns, with the key column being the third, and the delimiter a single character ‘,’. The file contains lines:

0,1,Cc,3
00,1,A,9
000,1,B,27
0000,1,A,81

Note that keys ‘Cc’ and ‘B’ both occur once, and when ordered by value (then key) ‘B’ should appear before ‘Cc’.

Test Scenario 2 – Two copies/double-delimiter/first column

Here the test file has three columns, with the key column being the first, and the delimiter two characters ‘;;’. The file contains two identical lines:

X;;1;;A
X;;1;;A

Java – JUnit

JUnit Code

package colgroup;
/***************************************************************************************************
Name:        TestColGroup.java

Description: JUnit testing class for ColGroup class. Uses Parameterized.class to data-drive
                                                                               
Modification History
Who                  When        Which What
-------------------- ----------- ----- -------------------------------------------------------------
B. Furey             22-Oct-2016 1.0   Created                       

***************************************************************************************************/
import static org.junit.Assert.*;

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;
import java.util.Map;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized.Parameters;
import org.junit.runners.Parameterized;

@RunWith(Parameterized.class)
public class TestColGroup {
/***************************************************************************************************

Private instance variables: 2 scenarios, input, and expected records declared here, initially in 
2-level generic arrays, but expected records transferred to List for assertion

***************************************************************************************************/
  private ColGroup colGroup = null;
  private String testFile = "H:/Script/Input/ut_group.csv";
  private String[][] testLines = new String[][] { 
      {"0,1,Cc,3", "00,1,A,9", "000,1,B,27", "0000,1,A,81"}, 
      {"X;;1;;A", "X;;1;;A"}
  };
  private String[] testDelim = new String[] {",", ";;"};
  private int[] testColnum = new int[] {2, 0};
  private List<String> lines;
  private String delim;
  private int colnum;

  private String[][] keysK = new String[][] { 
      {"A", "Bx", "Cc"}, 
      {"X"}
  };
  private int[][] valuesK = new int[][] { 
      {2, 1, 1}, 
      {2}
  };
  private String[][] keysV = new String[][] { 
      {"B", "Cc", "A"},
      {"X"}
  };
  private int[][] valuesV = new int[][] { 
      {1, 1, 2}, 
      {2}
  };
  private int expAsIs;
  private List<Map.Entry<String,Long>> expListK = null;
  private List<Map.Entry<String,Long>> expListV = null;

  private void addMap (int i, String strValK, int lonValK, String strValV, int lonValV) {
    expListK.add (i, new AbstractMap.SimpleEntry<String, Long> (strValK, (long) lonValK));
    expListV.add (i, new AbstractMap.SimpleEntry<String, Long> (strValV, (long) lonValV));
  }

  private int testNum;

  /***************************************************************************************************

  TestColGroup: Constructor function, which sets the instance variables for given scenario (testNum), and
                is called before each test with parameters passed via test_data (see end)

  ***************************************************************************************************/
  public TestColGroup (int   testNum,   // test scenario number
                  int   nGroups) { // number of groups

    System.out.println("Doing TestCG3 before test "+testNum+"...");
    this.lines = Arrays.asList (testLines[testNum]);
    this.delim = testDelim[testNum];
    this.colnum = testColnum[testNum];
    this.expAsIs = nGroups;
    this.testNum = testNum;
    int i = 0;
    expListK = new ArrayList<Map.Entry<String,Long>>(keysK[testNum].length);
    expListV = new ArrayList<Map.Entry<String,Long>>(keysV[testNum].length);
    for (String k : keysK[testNum]) {
      addMap (i, k, valuesK[testNum][i], keysV[testNum][i], valuesV[testNum][i]);
      i++;
    }
  }
  /***************************************************************************************************

  getGroup: Before each test method to write the test file and instantiate base object, using instance
            variables set for the scenario in TestCG3

  ***************************************************************************************************/
  @Before
  public void getGroup() {
    try {
      System.out.println("Doing setup before test "+this.testNum+"...");
      Files.write (Paths.get (testFile), lines, StandardCharsets.UTF_8);
      colGroup = new ColGroup (testFile, delim, colnum);
    } catch (IOException e) {
      e.printStackTrace();
    }
  }
  /***************************************************************************************************

  delFile: After each test method to delete the test file

  ***************************************************************************************************/
  @After
  public void delFile() {
    try {
      System.out.println("Doing teardown after test "+this.testNum+"...");
      Files.delete(Paths.get (testFile));
    } catch (IOException e) {
      e.printStackTrace();
    }
  }
  /***************************************************************************************************

  test*: Test method for each base method; each one is run once for each record defined in test_data
         in @Parameters

  ***************************************************************************************************/
  @Test
  public void testAsIs() {

    List<Map.Entry<String,Long>> actList = colGroup.listAsIs();
    assertEquals ("(as is)", expAsIs, actList.size());
    colGroup.prList("(as is)", actList);
  }
  @Test
  public void testKey() {

    List<Map.Entry<String,Long>> actList = colGroup.sortByKey();
    assertEquals ("keys", expListK, actList);
    colGroup.prList("keys", actList);
  }
  @Test
  public void testValue() {
    List<Map.Entry<String,Long>> actList = colGroup.sortByValue();
    assertEquals ("values", expListV, actList);
    colGroup.prList("values", actList);
  }
  /***************************************************************************************************

  test_data: @Parameters section allows passing of data into tests per scenario; neater to pass in a
             pointer to the instance arrays for most of the data

  ***************************************************************************************************/
  @Parameters
  public static Collection<Object[]> test_data() {
    Object[][] data = new Object[][] { {0, 3}, {1, 2} }; // 2 records, columns = scenario #, # groups
    return Arrays.asList(data);
  }
}

JUnit Output

Unit Test Results.

Designed for use with JUnit and Ant.

All Tests

Class         Name           Status   Type                                                          Time(s)
TestColGroup  testKey[0]     Failure  keys expected:<[A=2, Bx=1, Cc=1]> but was:<[A=2, B=1, Cc=1]>  0.055

junit.framework.AssertionFailedError: keys expected:<[A=2, Bx=1, Cc=1]> but was:<[A=2, B=1, Cc=1]>
at colgroup.TestColGroup.testKey(TestColGroup.java:150)

TestColGroup  testValue[0]   Success                                                               0.004
TestColGroup  testAsIs[0]    Success                                                               0.002
TestColGroup  testKey[1]     Success                                                               0.004
TestColGroup  testValue[1]   Success                                                               0.002
TestColGroup  testAsIs[1]    Failure  (as is) expected:<2> but was:<1>                              0.002

junit.framework.AssertionFailedError: (as is) expected:<2> but was:<1>
at colgroup.TestColGroup.testAsIs(TestColGroup.java:143)

JUnit Notes

  • JUnit first creates instances of the test class for each test method and starts running the tests after each instance is created
  • From JUnit 4 it is possible to data-drive testing by means of the @Parameters annotation, as implemented here, whereas I identified lack of data-driving as a common antipattern
  • Test methods are identifiable only by their method names rather than full descriptions
  • Scenarios are identifiable only by a number, which is even worse
  • Execution of a test method (instance) is aborted on failure of any assertion
  • Here there is only one test method per base method, but in general there could be several
  • JUnit aborting on assertion failure means that unit tests should have one or very few assertions, with additional unit tests being generated where necessary
  • Data-driving allows JUnit to generate additional unit tests from a single method at run-time for each scenario
  • A good approach is to start with a single parameterized scenario, then add new scenarios just by adding data records; this is how I usually proceed in Oracle
  • On assertion failure JUnit prints expected and actual values for both scalar and complex values passed to the assertion

Oracle – utPLSQL

utPLSQL Code

CREATE OR REPLACE PACKAGE BODY UT_Col_Group AS
/***************************************************************************************************

Description: utPLSQL unit testing for polyglot group-counting module, Col_Group
                                                                               
Modification History
Who                  When        Which What
-------------------- ----------- ----- -------------------------------------------------------------
Brendan Furey        30-Oct-2016 1.0   Created

***************************************************************************************************/

c_proc_name_asis        CONSTANT VARCHAR2(60) := 'Col_Group.ut_AIP_List_Asis';
c_proc_name_key         CONSTANT VARCHAR2(60) := 'Col_Group.ut_AIP_Sort_By_Key';
c_proc_name_value       CONSTANT VARCHAR2(60) := 'Col_Group.ut_AIP_Sort_By_Value';

c_file_2lis             CONSTANT L2_chr_arr := L2_chr_arr (
                                      L1_chr_arr ('0,1,Cc,3', '00,1,A,9', '000,1,B,27', '0000,1,A,81'),
                                      L1_chr_arr ('X;;1;;A', 'X;;1;;A')
);
c_prms_2lis             CONSTANT L2_chr_arr := L2_chr_arr (
                                      L1_chr_arr ('lines.csv', ',', '3'), L1_chr_arr ('lines.csv', ';;', '1')
);
c_scenario_lis          CONSTANT L1_chr_arr := L1_chr_arr ('Tie-break/single-delimiter/interior column', 'Two copies/double-delimiter/first column');

/***************************************************************************************************

ut_Setup, ut_Teardown: Mandatory procedures for utPLSQL but don't do anything here

***************************************************************************************************/
PROCEDURE ut_Setup IS
BEGIN
  NULL;
END ut_Setup;

PROCEDURE ut_Teardown IS
BEGIN
  NULL;
END ut_Teardown;

/***************************************************************************************************

Do_Test: Main local procedure for utPLSQL unit testing Col_Group methods

***************************************************************************************************/

PROCEDURE Do_Test (p_proc_name          VARCHAR2,      -- procedure name
                   p_exp_2lis           L2_chr_arr) IS -- expected values 2-d array

  /***************************************************************************************************

  Setup: Setup procedure for unit testing Col_Group package. Writes test file, then calls
         constructor API to store data in an array, line counts grouped by key

  ***************************************************************************************************/
  PROCEDURE Setup (p_file                 VARCHAR2,      -- file name
                   p_delim                VARCHAR2,      -- delimiter
                   p_colnum               PLS_INTEGER,   -- key column number in file
                   p_dat_lis              L1_chr_arr) IS -- lines to write to test file

  BEGIN

    Utils.Delete_File (p_file);
    Utils.Write_File (p_file, p_dat_lis);

    Col_Group.AIP_Load_File (p_file => p_file, p_delim => p_delim, p_colnum => p_colnum);

  END Setup;

  /***************************************************************************************************

  Call_Proc: Calls the base method according to calling procedure, and uses utPLSQL assert procedure
             to assert list counts, and for ordered methods, record lists in delimited form

  ***************************************************************************************************/
  PROCEDURE Call_Proc (p_exp_lis        L1_chr_arr,  -- expected values list (delimited records)
                       p_scenario       VARCHAR2) IS -- scenario description

    l_arr_lis           chr_int_arr;
  BEGIN

    l_arr_lis := CASE p_proc_name
                   WHEN c_proc_name_asis        THEN Col_Group.AIP_List_Asis
                   WHEN c_proc_name_key         THEN Col_Group.AIP_Sort_By_Key
                   WHEN c_proc_name_value       THEN Col_Group.AIP_Sort_By_Value
                 END;

    IF p_proc_name = c_proc_name_asis THEN

      utAssert.Eq (p_scenario || ': List count', l_arr_lis.COUNT, p_exp_lis(1), TRUE);

    ELSE

      utAssert.Eq (p_scenario || ': List count', l_arr_lis.COUNT, p_exp_lis.COUNT, TRUE);
      FOR i IN 1..LEAST (l_arr_lis.COUNT, p_exp_lis.COUNT) LOOP

        utAssert.Eq ('...Record', Utils.List_Delim (l_arr_lis(i).chr_field, l_arr_lis(i).int_field), p_exp_lis(i), TRUE);

      END LOOP;

    END IF;

  END Call_Proc;

BEGIN

  FOR i IN 1..c_file_2lis.COUNT LOOP

    Setup (p_file              => c_prms_2lis(i)(1),
           p_delim             => c_prms_2lis(i)(2),
           p_colnum            => c_prms_2lis(i)(3),
           p_dat_lis           => c_file_2lis(i));

    Call_Proc (p_exp_2lis(i), c_scenario_lis(i));

  END LOOP;

END Do_Test;

/***************************************************************************************************

ut_AIP_List_Asis: Entry procedure for utPLSQL testing Col_Group.AIP_List_Asis

***************************************************************************************************/
PROCEDURE ut_AIP_List_Asis IS

  c_proc_name           CONSTANT VARCHAR2(61) := c_proc_name_asis;
  c_exp_2lis            CONSTANT L2_chr_arr := L2_chr_arr (L1_chr_arr('3'), L1_chr_arr('2'));

BEGIN

  Do_Test (c_proc_name, c_exp_2lis);

END ut_AIP_List_Asis;

/***************************************************************************************************

ut_AIP_Sort_By_Key: Entry procedure for utPLSQL testing Col_Group.AIP_Sort_By_Key

***************************************************************************************************/
PROCEDURE ut_AIP_Sort_By_Key IS

  c_proc_name           CONSTANT VARCHAR2(61) := c_proc_name_key;
  c_exp_2lis            CONSTANT L2_chr_arr := L2_chr_arr (L1_chr_arr (Utils.List_Delim ('A','2'),
                                                                         Utils.List_Delim ('Bx','1'),
                                                                         Utils.List_Delim ('Cc','1')),
                                                             L1_chr_arr (Utils.List_Delim ('X','2'))
                                               );
BEGIN

  Do_Test (c_proc_name, c_exp_2lis);

END ut_AIP_Sort_By_Key;

/***************************************************************************************************

ut_AIP_Sort_By_Value: Entry procedure for utPLSQL testing Col_Group.AIP_Sort_By_Value

***************************************************************************************************/
PROCEDURE ut_AIP_Sort_By_Value IS

  c_proc_name           CONSTANT VARCHAR2(61) := c_proc_name_value;
  c_exp_2lis            CONSTANT L2_chr_arr := L2_chr_arr (L1_chr_arr (Utils.List_Delim ('B','1'),
                                                                         Utils.List_Delim ('Cc','1'),
                                                                         Utils.List_Delim ('A','2')),
                                                             L1_chr_arr (Utils.List_Delim ('X','2'))
                                               );
BEGIN

  Do_Test (c_proc_name, c_exp_2lis);

END ut_AIP_Sort_By_Value;

END UT_Col_Group;
/

utPLSQL Output

.
>  FFFFFFF   AA     III  L      U     U RRRRR   EEEEEEE
>  F        A  A     I   L      U     U R    R  E
>  F       A    A    I   L      U     U R     R E
>  F      A      A   I   L      U     U R     R E
>  FFFF   A      A   I   L      U     U RRRRRR  EEEE
>  F      AAAAAAAA   I   L      U     U R   R   E
>  F      A      A   I   L      U     U R    R  E
>  F      A      A   I   L       U   U  R     R E
>  F      A      A  III  LLLLLLL  UUU   R     R EEEEEEE
.
 FAILURE: ".COL_GROUP"
.
> Individual Test Case Results:
>
SUCCESS - COL_GROUP.UT_AIP_LIST_ASIS: EQ "Tie-break/single-delimiter/interior column: List count" Expected "3" and got "3"
>
FAILURE - COL_GROUP.UT_AIP_LIST_ASIS: EQ "Two copies/double-delimiter/first column: List count" Expected "2" and got "1"
>
SUCCESS - COL_GROUP.UT_AIP_SORT_BY_KEY: EQ "Tie-break/single-delimiter/interior column: List count" Expected "3" and got "3"
>
SUCCESS - COL_GROUP.UT_AIP_SORT_BY_KEY: EQ "...Record" Expected "A|2" and got "A|2"
>
FAILURE - COL_GROUP.UT_AIP_SORT_BY_KEY: EQ "...Record" Expected "Bx|1" and got "B|1"
>
SUCCESS - COL_GROUP.UT_AIP_SORT_BY_KEY: EQ "...Record" Expected "Cc|1" and got "Cc|1"
>
SUCCESS - COL_GROUP.UT_AIP_SORT_BY_KEY: EQ "Two copies/double-delimiter/first column: List count" Expected "1" and got "1"
>
SUCCESS - COL_GROUP.UT_AIP_SORT_BY_KEY: EQ "...Record" Expected "X|2" and got "X|2"
>
SUCCESS - COL_GROUP.UT_AIP_SORT_BY_VALUE: EQ "Tie-break/single-delimiter/interior column: List count" Expected "3" and got "3"
>
SUCCESS - COL_GROUP.UT_AIP_SORT_BY_VALUE: EQ "...Record" Expected "B|1" and got "B|1"
>
SUCCESS - COL_GROUP.UT_AIP_SORT_BY_VALUE: EQ "...Record" Expected "Cc|1" and got "Cc|1"
>
SUCCESS - COL_GROUP.UT_AIP_SORT_BY_VALUE: EQ "...Record" Expected "A|2" and got "A|2"
>
SUCCESS - COL_GROUP.UT_AIP_SORT_BY_VALUE: EQ "Two copies/double-delimiter/first column: List count" Expected "1" and got "1"
>
SUCCESS - COL_GROUP.UT_AIP_SORT_BY_VALUE: EQ "...Record" Expected "X|2" and got "X|2"
>
>
> Errors recorded in utPLSQL Error Log:
>
> NONE FOUND

PL/SQL procedure successfully completed.

utPLSQL Notes

  • Code is shared between the three test methods by means of a common local procedure Do_Test
  • Data-driving is achieved by using generic arrays and looping over scenarios
  • Only the simple assertion procedure utAssert.Eq is used; in my experience the more complex utPLSQL assertions are rarely suitable, and I think it is conceptually simpler to avoid them altogether
  • In general the lists of actual and expected values may have different cardinalities, so when looping over them I use the minimum cardinality as the loop maximum, and explicitly assert the counts; this means the detail for unmatched records may not be shown – in my own TRAPIT framework I handle this case by padding the smaller list with null records (see the sketch after this list)
  • Delimited records are asserted to limit the number of assertions, which would matter more in realistic cases with larger numbers of columns
  • utPLSQL has no data model for scenarios or output groups, just a single description field available within the assert call to describe the scenario/group/assertion; I recommend bundling scenario and group information into this message for want of anything better
  • utPLSQL allows only one test method per base method, unlike JUnit, so multiple assertions may be necessary; fortunately an assertion failure does not abort the test procedure
  • utPLSQL test procedures run sequentially, in alphabetical order, unlike JUnit
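To make the unmatched-records point concrete, here is a minimal sketch, not part of the package above and with a hypothetical procedure name, of the null-padding approach: both lists are extended to the larger cardinality before the assertion loop, so an unmatched record asserts null against a value and fails visibly instead of being skipped:

  PROCEDURE Assert_Padded (p_scenario       VARCHAR2,      -- scenario description
                           p_act_lis        L1_chr_arr,    -- actual values (delimited records)
                           p_exp_lis        L1_chr_arr) IS -- expected values (delimited records)
    l_act_lis   L1_chr_arr := p_act_lis;
    l_exp_lis   L1_chr_arr := p_exp_lis;
    l_max       PLS_INTEGER := GREATEST (p_act_lis.COUNT, p_exp_lis.COUNT);
  BEGIN

    IF l_act_lis.COUNT < l_max THEN l_act_lis.EXTEND (l_max - l_act_lis.COUNT); END IF; -- pad with null records
    IF l_exp_lis.COUNT < l_max THEN l_exp_lis.EXTEND (l_max - l_exp_lis.COUNT); END IF;

    FOR i IN 1..l_max LOOP
      utAssert.Eq (p_scenario || ': Record ' || i, l_act_lis(i), l_exp_lis(i), TRUE);
    END LOOP;

  END Assert_Padded;

With this in place the explicit count assertion becomes optional, since a count mismatch necessarily produces at least one record failure.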

Oracle – TRAPIT

This is the framework I described in the articles listed at TRAPIT – TRansactional API Testing in Oracle. I have renamed the framework, dropping the phrase 'unit testing' because it carries connotations from JUnit of testing small, isolated pieces of code, which is not what this framework is intended for.

TRAPIT Code

CREATE OR REPLACE PACKAGE BODY TT_Col_Group AS
/***************************************************************************************************

Description: TRAPIT (TRansactional API Testing) package for Col_Group

Further details: 'TRAPIT - TRansactional API Testing in Oracle'
                 
Modification History
Who                  When        Which What
-------------------- ----------- ----- -------------------------------------------------------------
Brendan Furey        22-Oct-2016 1.0   Created
Brendan Furey        13-Nov-2016 1.1   Utils_TT -> Utils_TT

***************************************************************************************************/

c_ms_limit              CONSTANT PLS_INTEGER := 2;
c_proc_name_asis        CONSTANT VARCHAR2(60) := 'Col_Group.tt_AIP_List_Asis';
c_proc_name_key         CONSTANT VARCHAR2(60) := 'Col_Group.tt_AIP_Sort_By_Key';
c_proc_name_value       CONSTANT VARCHAR2(60) := 'Col_Group.tt_AIP_Sort_By_Value';

c_file_2lis             CONSTANT L2_chr_arr := L2_chr_arr (
                                      L1_chr_arr ('0,1,Cc,3', '00,1,A,9', '000,1,B,27', '0000,1,A,81'),
                                      L1_chr_arr ('X;;1;;A', 'X;;1;;A')
);
c_prms_2lis             CONSTANT L2_chr_arr := L2_chr_arr (
                                      L1_chr_arr ('lines.csv', ',', '3'), L1_chr_arr ('lines.csv', ';;', '1')
);
c_scenario_lis          CONSTANT L1_chr_arr := L1_chr_arr ('Tie-break/single-delimiter/interior column', 'Two copies/double-delimiter/first column');
c_inp_group_lis         CONSTANT L1_chr_arr := L1_chr_arr ('Parameter', 'File');
c_inp_field_2lis        CONSTANT L2_chr_arr := L2_chr_arr (
                                      L1_chr_arr ('File Name', 'Delimiter', '*Column'),
                                      L1_chr_arr ('Line')
);
c_out_group_lis         CONSTANT L1_chr_arr := L1_chr_arr ('Sorted Array');
c_out_fields_2lis       CONSTANT L2_chr_arr := L2_chr_arr (L1_chr_arr ('Key', '*Count'));

/***************************************************************************************************

Do_Test: Main local procedure for TRAPIT testing Col_Group methods

***************************************************************************************************/
PROCEDURE Do_Test (p_proc_name          VARCHAR2,      -- procedure name
                   p_exp_2lis           L2_chr_arr,    -- expected values 2-d array
                   p_out_group_lis      L1_chr_arr,    -- output group names
                   p_out_fields_2lis    L2_chr_arr) IS -- output fields 2-d array

  l_timer_set           PLS_INTEGER;
  l_inp_3lis            L3_chr_arr := L3_chr_arr();
  l_act_2lis            L2_chr_arr := L2_chr_arr();

  /***************************************************************************************************

  Setup: Setup procedure for TRAPIT testing Col_Group package. Writes test file, then calls
         constructor API to store data in an array, line counts grouped by key

  ***************************************************************************************************/
  PROCEDURE Setup (p_file                 VARCHAR2,      -- file name
                   p_delim                VARCHAR2,      -- delimiter
                   p_colnum               PLS_INTEGER,   -- key column number in file
                   p_dat_lis              L1_chr_arr,    -- lines to write to test file
                   x_inp_2lis         OUT L2_chr_arr) IS -- generic inputs list

  BEGIN

    Utils.Delete_File (p_file);
    Utils.Write_File (p_file, p_dat_lis);

    x_inp_2lis := L2_chr_arr (L1_chr_arr (Utils.List_Delim (p_file, p_delim, p_colnum)),
                              p_dat_lis);

    Col_Group.AIP_Load_File (p_file => p_file, p_delim => p_delim, p_colnum => p_colnum);

  END Setup;

  /***************************************************************************************************

  Call_Proc: Calls the base method according to calling procedure, converts record lists to
             delimited form, and populates the actual list for later checking

  ***************************************************************************************************/
  PROCEDURE Call_Proc (x_act_lis OUT L1_chr_arr) IS -- actual values list (delimited records)

    l_arr_lis           chr_int_arr;
    l_act_lis           L1_chr_arr := L1_chr_arr();
  BEGIN

    l_arr_lis := CASE p_proc_name
                   WHEN c_proc_name_asis        THEN Col_Group.AIP_List_Asis
                   WHEN c_proc_name_key         THEN Col_Group.AIP_Sort_By_Key
                   WHEN c_proc_name_value       THEN Col_Group.AIP_Sort_By_Value
                 END;
    Timer_Set.Increment_Time (l_timer_set, Utils_TT.c_call_timer);

    l_act_lis.EXTEND (l_arr_lis.COUNT);
    FOR i IN 1..l_arr_lis.COUNT LOOP

      l_act_lis(i) := Utils.List_Delim (l_arr_lis(i).chr_field, l_arr_lis(i).int_field);

    END LOOP;
    x_act_lis := CASE p_proc_name
                   WHEN c_proc_name_asis THEN L1_chr_arr (l_arr_lis.COUNT)
                   ELSE l_act_lis
                 END;

  END Call_Proc;

BEGIN

  l_timer_set := Utils_TT.Init (p_proc_name);
  l_act_2lis.EXTEND (c_file_2lis.COUNT);
  l_inp_3lis.EXTEND (c_file_2lis.COUNT);

  FOR i IN 1..c_file_2lis.COUNT LOOP

    Setup (p_file              => c_prms_2lis(i)(1),
           p_delim             => c_prms_2lis(i)(2),
           p_colnum            => c_prms_2lis(i)(3),
           p_dat_lis           => c_file_2lis(i),     -- data file inputs
           x_inp_2lis          => l_inp_3lis(i));
    Timer_Set.Increment_Time (l_timer_set, 'Setup');

    Call_Proc (l_act_2lis(i));

  END LOOP;

  Utils_TT.Check_TT_Results (p_proc_name, c_scenario_lis, l_inp_3lis, l_act_2lis, p_exp_2lis,
                             l_timer_set, c_ms_limit, c_inp_group_lis, c_inp_field_2lis,
                             p_out_group_lis, p_out_fields_2lis);

END Do_Test;

/***************************************************************************************************

tt_AIP_List_Asis: Entry procedure for TRAPIT testing Col_Group.AIP_List_Asis

***************************************************************************************************/
PROCEDURE tt_AIP_List_Asis IS

  c_proc_name           CONSTANT VARCHAR2(61) := c_proc_name_asis;
  c_exp_2lis            CONSTANT L2_chr_arr := L2_chr_arr (L1_chr_arr('3'), L1_chr_arr('2'));
  c_out_group_lis       CONSTANT L1_chr_arr := L1_chr_arr ('Counts');
  c_out_fields_2lis     CONSTANT L2_chr_arr := L2_chr_arr (L1_chr_arr ('*#Records'));

BEGIN

  Do_Test (c_proc_name, c_exp_2lis, c_out_group_lis, c_out_fields_2lis);

END tt_AIP_List_Asis;

/***************************************************************************************************

tt_AIP_Sort_By_Key: Entry procedure for TRAPIT testing Col_Group.AIP_Sort_By_Key

***************************************************************************************************/
PROCEDURE tt_AIP_Sort_By_Key IS

  c_proc_name           CONSTANT VARCHAR2(61) := c_proc_name_key;
  c_exp_2lis            CONSTANT L2_chr_arr := L2_chr_arr (L1_chr_arr (Utils.List_Delim ('A','2'),
                                                                       Utils.List_Delim ('Bx','1'),
                                                                       Utils.List_Delim ('Cc','1')),
                                                           L1_chr_arr (Utils.List_Delim ('X','2'))
                                               );
BEGIN

  Do_Test (c_proc_name, c_exp_2lis, c_out_group_lis, c_out_fields_2lis);

END tt_AIP_Sort_By_Key;

/***************************************************************************************************

tt_AIP_Sort_By_Value: Entry procedure for TRAPIT testing Col_Group.AIP_Sort_By_Value

***************************************************************************************************/
PROCEDURE tt_AIP_Sort_By_Value IS

  c_proc_name           CONSTANT VARCHAR2(61) := c_proc_name_value;
  c_exp_2lis            CONSTANT L2_chr_arr := L2_chr_arr (L1_chr_arr (Utils.List_Delim ('B','1'),
                                                                       Utils.List_Delim ('Cc','1'),
                                                                       Utils.List_Delim ('A','2')),
                                                           L1_chr_arr (Utils.List_Delim ('X','2'))
                                               );
BEGIN

  Do_Test (c_proc_name, c_exp_2lis, c_out_group_lis, c_out_fields_2lis);

END tt_AIP_Sort_By_Value;

END TT_Col_Group;
/

TRAPIT Output


PL/SQL procedure successfully completed.



TRAPIT TEST: Col_Group.tt_AIP_List_Asis
=======================================

SCENARIO 1: Tie-break/single-delimiter/interior column {
========================================================

    INPUTS
    ======

        GROUP Parameter {
        =================

            File Name  Delimiter  Column
            ---------  ---------  ------
            lines.csv  ,               3

        }
        =

        GROUP File {
        ============

            Line
            -----------
            0,1,Cc,3
            00,1,A,9
            000,1,B,27
            0000,1,A,81

        }
        =

    OUTPUTS
    =======

        GROUP Counts: Actual = 1, Expected = 1 {
        ========================================

            F?  #Records
            --  --------
                       3

        } 0 failed, of 1: SUCCESS
        =========================

} 0 failed, of 1: SUCCESS
=========================

SCENARIO 2: Two copies/double-delimiter/first column {
======================================================

    INPUTS
    ======

        GROUP Parameter {
        =================

            File Name  Delimiter  Column
            ---------  ---------  ------
            lines.csv  ;;              1

        }
        =

        GROUP File {
        ============

            Line
            -------
            X;;1;;A
            X;;1;;A

        }
        =

    OUTPUTS
    =======

        GROUP Counts: Actual = 1, Expected = 1 {
        ========================================

            F?  #Records
            --  --------
            F          1
            >          2

        } 1 failed, of 1: FAILURE
        =========================

} 1 failed, of 1: FAILURE
=========================

TIMING: Actual = 0, Expected <= 2: SUCCESS
==========================================

SUMMARY for Col_Group.tt_AIP_List_Asis
======================================

Scenario                                    # Failed  # Tests  Status
------------------------------------------  --------  -------  -------
Tie-break/single-delimiter/interior column         0        1  SUCCESS
Two copies/double-delimiter/first column           1        1  FAILURE
Timing                                             0        1  SUCCESS
------------------------------------------  --------  -------  -------
Total                                              1        3  FAILURE
------------------------------------------  --------  -------  -------

Timer Set: Col_Group.tt_AIP_List_Asis, Constructed at 13 Nov 2016 09:07:08, written at 09:07:08
===============================================================================================
[Timer timed: Elapsed (per call): 0.01 (0.000006), CPU (per call): 0.00 (0.000000), calls: 1000, '***' denotes corrected line below]

Timer       Elapsed         CPU         Calls       Ela/Call       CPU/Call
-------  ----------  ----------  ------------  -------------  -------------
Setup          0.07        0.03             2        0.03700        0.01500
Caller         0.00        0.00             2        0.00000        0.00000
(Other)        0.01        0.01             1        0.00900        0.01000
-------  ----------  ----------  ------------  -------------  -------------
Total          0.08        0.04             5        0.01660        0.00800
-------  ----------  ----------  ------------  -------------  -------------

TRAPIT TEST: Col_Group.tt_AIP_Sort_By_Key
=========================================

SCENARIO 1: Tie-break/single-delimiter/interior column {
========================================================

    INPUTS
    ======

        GROUP Parameter {
        =================

            File Name  Delimiter  Column
            ---------  ---------  ------
            lines.csv  ,               3

        }
        =

        GROUP File {
        ============

            Line
            -----------
            0,1,Cc,3
            00,1,A,9
            000,1,B,27
            0000,1,A,81

        }
        =

    OUTPUTS
    =======

        GROUP Sorted Array: Actual = 3, Expected = 3 {
        ==============================================

            F?  Key  Count
            --  ---  -----
                A        2
            F   B        1
            >   Bx       1
                Cc       1

        } 1 failed, of 3: FAILURE
        =========================

} 1 failed, of 3: FAILURE
=========================

SCENARIO 2: Two copies/double-delimiter/first column {
======================================================

    INPUTS
    ======

        GROUP Parameter {
        =================

            File Name  Delimiter  Column
            ---------  ---------  ------
            lines.csv  ;;              1

        }
        =

        GROUP File {
        ============

            Line
            -------
            X;;1;;A
            X;;1;;A

        }
        =

    OUTPUTS
    =======

        GROUP Sorted Array: Actual = 1, Expected = 1 {
        ==============================================

            F?  Key  Count
            --  ---  -----
                X        2

        } 0 failed, of 1: SUCCESS
        =========================

} 0 failed, of 1: SUCCESS
=========================

TIMING: Actual = 2, Expected <= 2: SUCCESS
==========================================

SUMMARY for Col_Group.tt_AIP_Sort_By_Key
========================================

Scenario                                    # Failed  # Tests  Status
------------------------------------------  --------  -------  -------
Tie-break/single-delimiter/interior column         1        3  FAILURE
Two copies/double-delimiter/first column           0        1  SUCCESS
Timing                                             0        1  SUCCESS
------------------------------------------  --------  -------  -------
Total                                              1        5  FAILURE
------------------------------------------  --------  -------  -------

Timer Set: Col_Group.tt_AIP_Sort_By_Key, Constructed at 13 Nov 2016 09:07:08, written at 09:07:08
=================================================================================================
[Timer timed: Elapsed (per call): 0.01 (0.000006), CPU (per call): 0.00 (0.000000), calls: 1000, '***' denotes corrected line below]

Timer       Elapsed         CPU         Calls       Ela/Call       CPU/Call
-------  ----------  ----------  ------------  -------------  -------------
Setup          0.03        0.03             2        0.01400        0.01500
Caller         0.00        0.00             2        0.00150        0.00000
(Other)        0.01        0.02             1        0.01000        0.02000
-------  ----------  ----------  ------------  -------------  -------------
Total          0.04        0.05             5        0.00820        0.01000
-------  ----------  ----------  ------------  -------------  -------------

TRAPIT TEST: Col_Group.tt_AIP_Sort_By_Value
===========================================

SCENARIO 1: Tie-break/single-delimiter/interior column {
========================================================

    INPUTS
    ======

        GROUP Parameter {
        =================

            File Name  Delimiter  Column
            ---------  ---------  ------
            lines.csv  ,               3

        }
        =

        GROUP File {
        ============

            Line
            -----------
            0,1,Cc,3
            00,1,A,9
            000,1,B,27
            0000,1,A,81

        }
        =

    OUTPUTS
    =======

        GROUP Sorted Array: Actual = 3, Expected = 3 {
        ==============================================

            F?  Key  Count
            --  ---  -----
                B        1
                Cc       1
                A        2

        } 0 failed, of 3: SUCCESS
        =========================

} 0 failed, of 3: SUCCESS
=========================

SCENARIO 2: Two copies/double-delimiter/first column {
======================================================

    INPUTS
    ======

        GROUP Parameter {
        =================

            File Name  Delimiter  Column
            ---------  ---------  ------
            lines.csv  ;;              1

        }
        =

        GROUP File {
        ============

            Line
            -------
            X;;1;;A
            X;;1;;A

        }
        =

    OUTPUTS
    =======

        GROUP Sorted Array: Actual = 1, Expected = 1 {
        ==============================================

            F?  Key  Count
            --  ---  -----
                X        2

        } 0 failed, of 1: SUCCESS
        =========================

} 0 failed, of 1: SUCCESS
=========================

TIMING: Actual = 1, Expected <= 2: SUCCESS
==========================================

SUMMARY for Col_Group.tt_AIP_Sort_By_Value
==========================================

Scenario                                    # Failed  # Tests  Status
------------------------------------------  --------  -------  -------
Tie-break/single-delimiter/interior column         0        3  SUCCESS
Two copies/double-delimiter/first column           0        1  SUCCESS
Timing                                             0        1  SUCCESS
------------------------------------------  --------  -------  -------
Total                                              0        5  SUCCESS
------------------------------------------  --------  -------  -------

Timer Set: Col_Group.tt_AIP_Sort_By_Value, Constructed at 13 Nov 2016 09:07:08, written at 09:07:08
===================================================================================================
[Timer timed: Elapsed (per call): 0.01 (0.000008), CPU (per call): 0.01 (0.000010), calls: 1000, '***' denotes corrected line below]

Timer       Elapsed         CPU         Calls       Ela/Call       CPU/Call
-------  ----------  ----------  ------------  -------------  -------------
Setup          0.03        0.03             2        0.01450        0.01500
Caller         0.00        0.00             2        0.00050        0.00000
(Other)        0.01        0.02             1        0.01100        0.02000
-------  ----------  ----------  ------------  -------------  -------------
Total          0.04        0.05             5        0.00820        0.01000
-------  ----------  ----------  ------------  -------------  -------------

Suite Summary
=============

Package.Procedure               Tests  Fails         ELA         CPU
------------------------------  -----  -----  ----------  ----------
Col_Group.tt_AIP_List_Asis          3      1        0.08        0.04
Col_Group.tt_AIP_Sort_By_Key        5      1        0.04        0.05
Col_Group.tt_AIP_Sort_By_Value      5      0        0.04        0.05
------------------------------  -----  -----  ----------  ----------
Total                              13      2        0.17        0.14
------------------------------  -----  -----  ----------  ----------
Others error in (): ORA-20001: Suite BRENDAN returned error status: ORA-06512: at "LIB.UTILS_TT", line 152
ORA-06512: at "LIB.UTILS_TT", line 819
ORA-06512: at line 5



TRAPIT Notes

  • The approach to code-sharing and data-driving is similar to that used in the utPLSQL version
  • No assertions are made in the client code at all; the actual values are collected and passed to the library procedure, which performs the assertions
  • The well-known 'arrange-act-assert' pattern from the OO testing world is therefore not followed exactly, with no ill effects
  • The output displays all inputs and outputs in a 3-level format – scenario/group/record – with scenario descriptions, group names and column headings passed in (see the sketch after this list)
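As an illustration of the 3-level structure, the assignment below shows how the inputs for scenario 1 map onto the generic array types; it mirrors what Setup builds in the package above, and is shown for illustration only:

l_inp_3lis(1) := L2_chr_arr (                                                        -- scenario 1
                   L1_chr_arr (Utils.List_Delim ('lines.csv', ',', '3')),            -- group 'Parameter'
                   L1_chr_arr ('0,1,Cc,3', '00,1,A,9', '000,1,B,27', '0000,1,A,81')  -- group 'File'
                 );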

Oracle - SQL Developer Unit Test

I briefly tried to use this GUI framework as well, but soon gave up when I could not see how to handle the object array return values.

Conclusions

  • Some significant differences in functionality between the utPLSQL and JUnit frameworks have been noted
  • Following testing design patterns from the OO world may not always be advisable
  • It may be best to drop the term 'unit testing' for the database altogether, on the understanding that testing only transactional APIs, such as web service procedures, is a much more efficient approach
  • Consider using a data-driven approach to testing multiple scenarios, in both database and Java code, where applicable

All code and output can be seen on polyglot_group on GitHub, where Python, Ruby, Perl and Scala versions are also included.