Oracle PL/SQL API Demos Github Module

This post is essentially the README for my GitHub module, Oracle PL/SQL API Demos, which demonstrates instrumentation and logging, code timing and unit testing of Oracle PL/SQL APIs.

PL/SQL procedures were written against Oracle’s HR demo schema to represent the different kinds of API across two axes: Setter/Getter and Real Time/Batch.

Mode          | Setter Example (S)          | Getter Example (G)
--------------|-----------------------------|----------------------------------
Real Time (R) | Web service saving          | Web service getting by ref cursor
Batch (B)     | Batch loading of flat files | View

The PL/SQL procedures and view were written originally to demonstrate unit testing, and are as follows:

  • RS: Emp_WS.Save_Emps – Save a list of new employees to the database, returning a list of ids with Julian dates; logging errors to an err$ table
  • RG: Emp_WS.Get_Dept_Emps – For a given department id, return department and employee details including salary ratios, excluding employees with job ‘AD_ASST’, and returning none if the global salary total is less than 1600, via ref cursor
  • BS: Emp_Batch.Load_Emps – Load new/updated employees from file via external table
  • BG: hr_test_view_v – View returning department and employee details including salary ratios, excluding employees with job ‘AD_ASST’, and returning none if the global salary total is less than 1600

Each of these is unit tested, as described below, and in addition there is a driver script, api_driver.sql, that calls each of them and lists the results of logging and code timing.

I presented on Writing Clean Code in PL/SQL and SQL at the Ireland Oracle User Group Conference on 4 April 2019 in Dublin. The modules demonstrated here are written in the style recommended in the presentation where, in particular:

  • ‘functional’ code is preferred
  • object-oriented code is used only where necessary, using a package record array approach, rather than type bodies
  • record types, defaults and overloading are used extensively to provide clean API interfaces

Screen Recordings on this Module

1 Overview (6 recordings – 48m)

2 Prerequisite Tools (1 recording – 3m)

3 Installation (3 recordings – 15m)

4 Running the Scripts (4 recordings – 30m)

Unit Testing

The PL/SQL APIs are tested using the Math Function Unit Testing design pattern, with test results in HTML and text format included. The design pattern is based on the idea that all API testing programs can follow a universal design pattern, using the concept of a ‘pure’ function as a wrapper to manage the ‘impurity’ inherent in database APIs. I explained the concepts involved in a presentation at the Ireland Oracle User Group Conference in March 2018:

The Database API Viewed As A Mathematical Function: Insights into Testing

In this data-driven design pattern a driver program reads a set of scenarios from a JSON file and loops over them, calling the wrapper function with each scenario as input and obtaining the results as the return value. Utility functions from the Trapit module convert the input JSON into PL/SQL arrays and, conversely, convert the output arrays into JSON text that is written to an output JSON file. This output file contains all the input values and output values (expected and actual), as well as metadata describing the input and output groups. A separate nodejs module can be run to process the output files and create HTML files showing the results: each unit test (say `pkg.prc`) has its own root page `pkg.prc.html`, with links to a page for each scenario, located within a subfolder `pkg.prc`. Here, they have been copied into a subfolder test_output, as follows (a sketch of the wrapper pattern follows the list):

  • tt_emp_batch.load_emps
  • tt_emp_ws.get_dept_emps
  • tt_emp_ws.save_emps
  • tt_view_drivers.hr_test_view_v
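
To make the pattern concrete, here is a minimal hedged sketch of a ‘pure’ wrapper function of the kind the driver calls. The type and names are illustrative assumptions only, not the module’s actual signatures (the real wrappers take nested string arrays converted from the scenario JSON by the Trapit utilities):

CREATE OR REPLACE TYPE chr_arr AS TABLE OF VARCHAR2(4000)
/
-- Sketch only: all scenario inputs arrive as generic delimited strings, and
-- all actual outputs are returned the same way, so the driver can treat
-- every API under test uniformly
CREATE OR REPLACE FUNCTION Purely_Wrap_Example(
            p_inp_lis  chr_arr)    -- delimited input records
            RETURN     chr_arr IS
  l_act_lis  chr_arr := chr_arr();
BEGIN
  -- 1. Parse p_inp_lis and call the API under test
  -- 2. Select any database side-effects into l_act_lis as delimited strings
  ROLLBACK;                        -- undo side-effects before returning
  RETURN l_act_lis;
END Purely_Wrap_Example;
/

The rollback is what makes the wrapper externally ‘pure’: each scenario starts from, and leaves behind, the same database state.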

Where an actual output record matches the expected record, just one is shown, while if the actual differs it is listed below the expected record with a red background. The employee group in scenario 4 of tt_emp_ws.save_emps has two records deliberately not matching, the first by changing the expected salary and the second by adding a duplicate expected record.

Each of the `pkg.prc` subfolders also includes a JSON Structure Diagram, `pkg.prc.png`, showing the input/output structure of the pure unit test wrapper function. For example:

Running a test causes the actual values to be inserted into the JSON object, which is then formatted as HTML pages:

Here is the output JSON for the 4th scenario of the corresponding test:

    "2 valid records, 1 invalid job id (2 deliberate errors)":{
       "inp":{
          "Employee":[
             "LN 4|EM 4|IT_PROG|3000",
             "LN 5|EM 5|NON_JOB|4000",
             "LN 6|EM 6|IT_PROG|5000"
          ]
       },
       "out":{
          "Employee":{
             "exp":[
                "3|LN 4|EM 4|IT_PROG|1000",
                "5|LN 6|EM 6|IT_PROG|5000",
                "5|LN 6|EM 6|IT_PROG|5000"
             ],
             "act":[
                "3|LN 4|EM 4|IT_PROG|3000",
                "5|LN 6|EM 6|IT_PROG|5000"
             ]
          },
          "Output array":{
             "exp":[
                "3|LIKE /^[A-Z -]+[A-Z]$/",
                "0|ORA-02291: integrity constraint (.) violated - parent key not found",
                "5|LIKE /^[A-Z -]+[A-Z]$/"
             ],
             "act":[
                "3|ONE THOUSAND NINE HUNDRED NINETY-EIGHT",
                "0|ORA-02291: integrity constraint (.) violated - parent key not found",
                "5|TWO THOUSAND"
             ]
          },
          "Exception":{
             "exp":[
             ],
             "act":[
             ]
          }
       }
    }

Here are images of the unit test summary and 4th scenario pages for the corresponding test:

Logging and Instrumentation

Program instrumentation means including lines of code to monitor the execution of a program, such as tracing lines covered, numbers of records processed, and timing information. Logging means storing such information, in database tables or elsewhere.

The Log_Set module allows for logging of various data in a lines table linked to a header for a given log, with the logging level configurable at runtime. The module also uses Oracle’s DBMS_Application_Info API to allow for logging in memory only with information accessible via the V$SESSION and V$SESSION_LONGOPS views.
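
For reference, the underlying DBMS_Application_Info calls are simple; here is a minimal sketch using Oracle’s documented API, with invented values rather than code from the module:

BEGIN
  -- Module and action appear in V$SESSION while the session is working
  DBMS_Application_Info.Set_Module(module_name => 'EMP_WS',
                                   action_name => 'Save_Emps');
  -- Client info can record progress or exit status, also visible in V$SESSION
  DBMS_Application_Info.Set_Client_Info(
                                   client_info => 'Exit: Save_Emps, 2 inserted');
END;
/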

The two web service-type APIs, Emp_WS.Save_Emps and Emp_WS.Get_Dept_Emps, use a configuration that logs only via DBMS_Application_Info, while the batch API, Emp_Batch.Load_Emps, also logs to the tables. The view, of course, does not do any logging itself, but calling programs can log the results of querying it.

The driver script api_driver.sql calls all four of the demo APIs and performs its own logging of the calls and the results returned, including the DBMS_Application_Info on exit. The driver logs using a special DEBUG configuration, where the log is constructed implicitly by the first Put and there is no need to pass a log identifier when putting (so debug lines can easily be added in any called package). At the end of the script, queries are run that list the contents of the logs created during the session in creation order: first normal logs, then a listing of error logs (of which one is created by deliberately raising an exception handled in WHEN OTHERS).
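
As an illustration of this implicit logging, a single call like the following would construct the DEBUG log and write a line in one step. The procedure name Put_Line is an assumption based on the description above; the Log_Set module documentation is authoritative:

BEGIN
  -- No log identifier is passed: under the DEBUG configuration the first
  -- Put constructs the log implicitly (procedure name assumed)
  Log_Set.Put_Line('Call Emp_WS.Save_Emps to save a list of employees passed...');
END;
/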

Here, for example, is the text logged by the driver script for the first call:

Call Emp_WS.Save_Emps to save a list of employees passed...
===========================================================
DBMS_Application_Info: Module = EMP_WS: Log id 127
...................... Action = Log id 127 closed at 12-Sep-2019 06:20:2
...................... Client Info = Exit: Save_Emps, 2 inserted
Print the records returned...
=============================
1862 - ONE THOUSAND EIGHT HUNDRED SIXTY-TWO
1863 - ONE THOUSAND EIGHT HUNDRED SIXTY-THREE

Code Timing

The code timing module Timer_Set is used by the driver script, api_driver.sql, to time the various calls, and at the end of the main block the results are logged using Log_Set.
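
The usage pattern is to construct a timer set, then increment a named timer after each step to be timed, with unaccounted time appearing in the (Other) line of the results. A minimal sketch follows, with subprogram names assumed from the Timer_Set module’s description (its own documentation is authoritative):

DECLARE
  l_timer_set  PLS_INTEGER := Timer_Set.Construct('api_driver');
BEGIN
  -- ... call Emp_WS.Save_Emps here ...
  Timer_Set.Increment_Time(l_timer_set, 'Save_Emps');
  -- ... call Emp_WS.Get_Dept_Emps here ...
  Timer_Set.Increment_Time(l_timer_set, 'Get_Dept_Emps');
END;
/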

The timing results are listed for illustration below:

Timer Set: api_driver, Constructed at 12 Sep 2019 06:20:28, written at 06:20:29
===============================================================================
Timer             Elapsed         CPU       Calls       Ela/Call       CPU/Call
-------------  ----------  ----------  ----------  -------------  -------------
Save_Emps            0.00        0.00           1        0.00100        0.00000
Get_Dept_Emps        0.00        0.00           1        0.00100        0.00000
Write_File           0.00        0.02           1        0.00300        0.02000
Load_Emps            0.22        0.15           1        0.22200        0.15000
Delete_File          0.00        0.00           1        0.00200        0.00000
View_To_List         0.00        0.00           1        0.00200        0.00000
(Other)              0.00        0.00           1        0.00000        0.00000
-------------  ----------  ----------  ----------  -------------  -------------
Total                0.23        0.17           7        0.03300        0.02429
-------------  ----------  ----------  ----------  -------------  -------------
[Timer timed (per call in ms): Elapsed: 0.00794, CPU: 0.00873]

Installation

Install 1: Install pre-requisite tools

Oracle database with HR demo schema

The database installation requires a minimum Oracle version of 12.2, with Oracle’s HR demo schema installed: Oracle Database Software Downloads.

If the HR demo schema is not installed, it can be obtained here: Oracle Database Sample Schemas.

GitHub Desktop

In order to clone the code as a git repository you need to have the git application installed. I recommend the GitHub Desktop UI for managing repositories on Windows. This depends on the git application, available here: git downloads, but it can also be installed from within GitHub Desktop, according to these instructions:
How to install GitHub Desktop.

nodejs (JavaScript backend)

nodejs is needed to run a program that turns the unit test output files into formatted HTML pages. No JavaScript knowledge is required to run the program, and nodejs can be installed here.

Install 2: Clone git repository

The following steps will download the repository into a folder, oracle_plsql_api_demos, within your GitHub root folder:

  • Open Github desktop and click [File/Clone repository…]
  • Paste into the url field on the URL tab: https://github.com/BrenPatF/oracle_plsql_api_demos.git
  • Choose local path as folder where you want your GitHub root to be
  • Click [Clone]
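
Alternatively, with the git application installed, the same clone can be made from a command window in your GitHub root folder:

$ git clone https://github.com/BrenPatF/oracle_plsql_api_demos.git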

Install 3: Install pre-requisite modules

The demo install depends on the pre-requisite modules Utils, Trapit, Log_Set, and Timer_Set; the `lib` and `app` schemas refer to the schemas in which Utils and the examples are installed, respectively.

The pre-requisite modules can be installed by following the instructions for each module at the module root pages listed in the `See also` section below. This allows inclusion of the examples and unit tests for those modules. Alternatively, the next section shows how to install these modules directly, without their examples or unit tests.

[Schema: sys; Folder: install_prereq] Create lib and app schemas and Oracle directory

  • install_sys.sql creates an Oracle directory, `input_dir`, pointing to ‘c:\input’. Update this if necessary to a folder on the database server with read/write access for the Oracle OS user
  • Run script from sqlplus:

SQL> @install_sys

[Schema: lib; Folder: install_prereq\lib] Create lib components

  • Run script from sqlplus:

SQL> @install_lib_all

[Schema: app; Folder: install_prereq\app] Create app synonyms

  • Run script from sqlplus:

SQL> @c_syns_all

[Folder: (npm root)] Install npm trapit package

The npm trapit package is a nodejs package used to format unit test results as HTML pages.

Open a DOS or PowerShell window in the folder where you want to install npm packages and, with nodejs installed, run:

$ npm install trapit

This should install the trapit nodejs package in a subfolder .\node_modules\trapit.

Install 4: Create Oracle PL/SQL API Demos components

[Folder: (root)]

  • Copy the following files from the root folder to the server folder pointed to by the Oracle directory INPUT_DIR:
    • tt_emp_ws.save_emps_inp.json
    • tt_emp_ws.get_dept_emps_inp.json
    • tt_emp_batch.load_emps_inp.json
    • tt_view_drivers.hr_test_view_v_inp.json
  • There is also a bash script to do this, assuming C:\input as INPUT_DIR:

$ ./cp_json_to_input.sh

[Schema: lib; Folder: lib]

  • Run script from sqlplus:

SQL> @install_jobs app

[Schema: hr; Folder: hr]

  • Run script from sqlplus:

SQL> @install_hr app

[Schema: app; Folder: app]

  • Run script from sqlplus:

SQL> @install_api_demos lib

Running Driver Script and Unit Tests

Running driver script

[Schema: app; Folder: app]

  • Run script from sqlplus:

SQL> @api_driver

The output is in api_driver.log.

Running unit tests

[Schema: app; Folder: app]

  • Run script from sqlplus:

SQL> @r_tests

Testing is data-driven from the input JSON objects that are loaded from files into the table tt_units (at install time), and produces JSON output files in the INPUT_DIR folder, which contain arrays of expected and actual records by group and scenario. These files are:

  • tt_emp_batch.load_emps_out.json
  • tt_emp_ws.get_dept_emps_out.json
  • tt_emp_ws.save_emps_out.json
  • tt_view_drivers.hr_test_view_v_out.json

The output files are processed by a nodejs program that has to be installed separately, from the `npm` nodejs repository, as described in the Installation section above. The nodejs program produces listings of the results in HTML and/or text format, and result files are included in the subfolders below test_output. To run the processor (in Windows), open a DOS or PowerShell window in the trapit package folder after placing the output JSON files in the subfolder ./examples/externals and run:

$ node ./examples/externals/test-externals

Operating System/Oracle Versions

Windows

Tested on Windows 10; should be OS-independent.

Oracle

  • Tested on Oracle Database Version 19.3.0.0.0 (minimum required: 12.2)

See also

License

MIT






Database API Viewed As A Mathematical Function: Insights into Testing – OUG Ireland Conference 2018

I presented at the OUG Ireland 2018 conference, which was held on 22 and 23 March 2018 in the Gresham Hotel on O’Connell Street in Dublin, where I had also presented at the previous year’s conference. Here are my slides:

Twitter hashtag: #oug_ire

Here’s my agenda slide

Plus a couple of diagrams from my concluding slides…






Knapsacks and Networks in SQL

I opened a GitHub account, Brendan’s GitHub Page, last year and have added a number of projects since then, in PL/SQL and other 3GL languages. Partly in response to a request for the code for one of my blog articles on an interesting SQL problem, I decided recently to create a new repo for the SQL behind a group of articles on solving difficult combinatorial optimisation problems via ‘advanced’ SQL techniques such as recursive subquery factoring and the Model clause: sql_demos – Brendan’s repo for interesting SQL. It includes installation scripts with object creation and data setup, and scripts to run the SQL on the included datasets. The idea is that anyone with the pre-requisites should be able to reproduce my results within a few minutes of downloading the repo.

[Left image from Knapsack problem; right image copied from Chapter 11 Dynamic Programming]

In this article I embed each of the earlier articles relevant to the GitHub repo with a brief preamble.

The first two articles are from January 2013 and use recursive subquery factoring to find exact solutions for the single and multiple knapsack problem, and also include PL/SQL solutions for comparison. They avoid the ‘brute force’ approach by truncating search paths as soon as limit constraints are exceeded. The cumulative paths are stored in string variables passed through the iterations (which would not be possible with the older Connect By hierarchical syntax).

In these articles I illustrate the nature of the problems using Visio diagrams, and include dimensional performance benchmarking results, using a technique that I presented on at last year’s Ireland OUG conference: Dimensional Performance Benchmarking of SQL – IOUG Presentation. I also illustrate the queries using my own method for diagramming SQL queries.

A Simple SQL Solution for the Knapsack Problem (SKP-1), January 2013

An SQL Solution for the Multiple Knapsack Problem (SKP-m), January 2013
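
To give a flavour of the technique used in these two articles, here is a minimal sketch (not the articles’ actual SQL), assuming a hypothetical items table (id, item_weight, item_value) and a weight limit of 100: the path string accumulates the items chosen, and the WHERE clause in the recursive branch truncates infeasible paths immediately:

WITH rsf (item_id, tot_weight, tot_value, path) AS (
  SELECT id, item_weight, item_value, Cast(id AS VARCHAR2(4000))
    FROM items
   WHERE item_weight <= 100
  UNION ALL
  SELECT i.id, r.tot_weight + i.item_weight, r.tot_value + i.item_value,
         r.path || ',' || i.id
    FROM rsf r
    JOIN items i
      ON i.id > r.item_id                     -- avoid permuted duplicates
   WHERE r.tot_weight + i.item_weight <= 100  -- truncate on limit breach
)
SELECT path, tot_weight, tot_value
  FROM rsf
 ORDER BY tot_value DESC;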

The next article uses the Model clause to find a more general solution to a problem posed on AskTom as a ‘bin fitting’ problem. I also solved the problem by other methods, including recursive subquery factoring. I illustrate the problem itself, as well as the Model iteration scheme, using Visio diagrams, and again include dimensional performance benchmarking. The results show how quadratic performance variation can be turned into much faster linear variation by means of a temporary table in this kind of problem.

SQL for the Balanced Number Partitioning Problem, May 2013

This article arose from a question on OTN, and concerns a type of knapsack or bin-fitting problem that is quite tricky to solve in SQL, where the items fall into categories on which there are separate constraints. I introduced a new idea here, to filter out unpromising paths within recursive subquery factoring by means of analytic functions, in order to allow the technique to be used to generate solutions for larger problems without guaranteed optimality, but in shorter time. Two realistic datasets were used, one from the original poster, and another I got from a scraping website.

SQL for the Fantasy Football Knapsack Problem, June 2013

This article is on a classic ‘hard’ optimisation problem, and uses recursive subquery factoring with the same filtering technique as the previous article, and shows that it’s possible to solve a problem involving 312 American cities quite quickly in pure SQL using the approximation technique. It also uses a simple made-up example dataset to illustrate its working.

SQL for the Travelling Salesman Problem, July 2013

The following two articles concern finding shortest paths between given nodes in a network, and arose from a question on OTN. The first one again uses recursive subquery factoring with a filtering mechanism to exclude paths as early as possible, in a similar way to the approximate solution methods in the earlier articles. In this case, however, reasoning about the nature of the problem shows that we are not in fact sacrificing optimality. The article has quite a lot of explanatory material on how the SQL works, and uses small example datasets.

The second article considers how to improve performance further by obtaining a preliminary approximate solution that can be used as a bounding mechanism in a second step to find the exact solutions. This article uses two realistic networks as examples, including one having 428,156 links.

SQL for Shortest Path Problems, April 2015

SQL for Shortest Path Problems 2: A Branch and Bound Approach, May 2015
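
As a flavour of the path-building approach in these two articles, here is a minimal sketch (not the articles’ actual SQL), assuming a hypothetical arcs table (src, dst, cost) and start and end nodes ‘A’ and ‘Z’: the delimited path string both records the route and gives a cheap cycle exclusion, and further pruning conditions can be added to the recursive branch:

WITH paths (node, path, total_cost) AS (
  SELECT a.dst, Cast(',' || a.src || ',' || a.dst || ',' AS VARCHAR2(4000)),
         a.cost
    FROM arcs a
   WHERE a.src = 'A'                            -- start node
  UNION ALL
  SELECT a.dst, p.path || a.dst || ',', p.total_cost + a.cost
    FROM paths p
    JOIN arcs a
      ON a.src = p.node
   WHERE p.path NOT LIKE '%,' || a.dst || ',%'  -- exclude cycles early
)
SELECT path, total_cost
  FROM paths
 WHERE node = 'Z'                               -- end node
 ORDER BY total_cost;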

In the article above I cited results from a general network analysis package I had developed that obtains all the distinct connected subnetworks with their structures in an efficient manner using PL/SQL recursion. It is worth noting that for that kind of problem recursive SQL alone is very inefficient, and I wrote the following article to try to explain why that is so, and why the Connect By syntax is generally much worse than recursive subquery factoring.

Recursive SQL for Network Analysis, and Duality, September 2015

The PL/SQL package mentioned, which I think implements a ‘named’ algorithm although I didn’t know that when I wrote it (I don’t recall the name right now, sorry 🙁 ), is available on GitHub: Brendan’s network structural analysis Oracle package, with article:

PL/SQL Pipelined Function for Network Analysis, May 2015






