Create Temporary Table in Oracle

To create a table named TEST with column col1 of type VARCHAR2(10), col2 of type NUMBER, and col3 of type CLOB, we can use the CREATE TABLE statement as follows:
CREATE TABLE TEST(col1 VARCHAR2(10), col2 NUMBER, col3 CLOB);
Now if I insert data into the table, the data is visible and accessible to all users. In many cases the data inside a table only needs to reside there temporarily. In that case we can use temporary tables. Temporary tables are useful in applications where a result set is to be buffered. To create a temporary table we use the CREATE GLOBAL TEMPORARY TABLE statement.
Temporary tables can be of two types, based on the ON COMMIT clause setting.
1)ON COMMIT DELETE ROWS specifies that the temporary table is transaction specific. Data persists in the table only until the end of the transaction. When you end the transaction, the database truncates the table (deletes all rows). So if you issue a commit or run DDL, the data inside the temporary table is lost. This is the default option.
Example:
(i)This statement creates a temporary table that is transaction specific:
CREATE GLOBAL TEMPORARY TABLE test_temp(col1 number, col2 number) ON COMMIT DELETE ROWS;
Table created.
(ii)Insert a row into the temporary table.
insert into test_temp values(3,7);
1 row created.
(iii)Look at the data in the table.
select * from test_temp;
COL1 COL2
---------- ----------
3 7
(iv)Issue Commit.
commit;
Commit complete.
(v)Now look at the data in the temporary table. Since I created a transaction-specific temporary table (ON COMMIT DELETE ROWS), the data is lost after the commit.
SQL> select * from test_temp;
no rows selected
2)ON COMMIT PRESERVE ROWS specifies that the temporary table is session specific. Data persists in the table only until the end of the session. When you end the session, the database truncates the table (deletes all rows). So if you type exit in SQL*Plus, the data inside the temporary table is lost.
Example of Session Specific Temporary Tables:
(i)Create a session-specific temporary table test_temp2.
CREATE GLOBAL TEMPORARY TABLE test_temp2 (col1 number, col2 number)
ON COMMIT PRESERVE ROWS;
(ii)Insert data into it and look at the data both before and after the commit.
insert into test_temp2 values(3,7);
1 row created.
SQL>select * from test_temp2;
COL1 COL2
---------- ----------
3 7
(iii) commit;
Commit complete.
(iv)select * from test_temp2;
COL1 COL2
---------- ----------
3 7

(v)End the session.
exit;


Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, OLAP and Data Mining options
(vi)Connect in a new session and look at the data again.
$ sqlplus apps/[email protected]
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> select * from test_temp2;
no rows selected

This is how Global Temporary Tables are used.
Features of Temporary Tables
1.Indexes can be created on temporary tables. They are also temporary, and the data in the index has the same session or transaction scope as the data in the underlying table (a quick example follows this list).
2.Unlike permanent tables, temporary tables and their indexes do not automatically allocate a segment when they are created. Instead, segments are allocated when the first INSERT (or CREATE TABLE AS SELECT) is performed. This means that if a SELECT, UPDATE, or DELETE is performed before the first INSERT, the table appears to be empty.
3.DDL operations (except TRUNCATE) are allowed on an existing temporary table only if no session is currently bound to that temporary table.
4.If you roll back a transaction, the data you entered is lost, although the table definition persists.
5.A transaction-specific temporary table allows only one transaction at a time. If there are several autonomous transactions in a single transaction scope, each autonomous transaction can use the table only after the previous one commits.
6.Because the data in a temporary table is, by definition, temporary, backup and recovery of temporary table data is not available in the event of a system failure.
7.It is good to know that the temporary table itself is not temporary; only the data within it is temporary.
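As a quick illustration of point 1, an index on the test_temp2 table created earlier is built with ordinary CREATE INDEX syntax (the index name here is just an example):

CREATE INDEX test_temp2_idx ON test_temp2 (col1);

The index is itself a temporary object, and its entries disappear along with the table rows at the end of the session.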
Restrictions on Temporary Tables
1.Temporary tables cannot be partitioned, clustered, or index organized.
2.You cannot specify any foreign key constraints on temporary tables.
3.Temporary tables cannot contain nested table columns.
4.You cannot specify the following clauses of the LOB_storage_clause: TABLESPACE, storage_clause, or logging_clause.
5.Parallel DML and parallel queries are not supported for temporary tables. Parallel hints are ignored. Specification of the parallel_clause returns an error.
6.You cannot specify the segment_attributes_clause, nested_table_col_properties, or parallel_clause.
7.Distributed transactions are not supported for temporary tables.

About Hierarchical Trees
The hierarchical tree displays data in the form of a standard navigator.
We can populate a hierarchical tree with values contained in a Record Group or Query Text.
At runtime, you can programmatically add, remove, modify, or evaluate elements in a hierarchical tree.
The amount of data displayed at any one time depends upon the expansion of individual data nodes.
Triggers
The following triggers are provided by Form Builder exclusively for hierarchical tree items.
1.       When-Tree-Node-Activated
2.       When-Tree-Node-Expanded
3.       When-Tree-Node-Selected
Built-ins
The following built-in subprograms are used to add, remove, modify, or evaluate elements in hierarchical tree items.
All built-ins are located in the FTREE built-in package.
1.       Add_Tree_Data
2.       Add_Tree_Node
3.       Delete_Tree_Node
4.       Find_Tree_Node
5.       Get_Tree_Node_Parent
6.       Get_Tree_Selection
7.       Populate_Group_From_Tree
8.       Populate_Tree
9.       Set_Tree_Selection
Now we will see how to develop a form using a hierarchical tree item for Oracle Applications.
Objectives :
1.       Create the Hierarchical Tree Item in a Non-Database Block.
2.       Populate the Data to the Hierarchical Tree Item Using Record Group at Runtime.
3.       Create a Database Block to Display the Data Based on the Element Selected in the Hierarchical Tree Item.
Database Objects Required to Develop this Form :
The attachment contains the script to create the database objects and the sample data to be inserted.
In the Canvas Tool Palette we can find the Hierarchical Tree item.
Note : The hierarchical tree item should be in a separate block as an individual item.
There are 2 ways to populate a hierarchical tree :
1.       Record Group
2.       Query Text
The record group query should follow a specified structure, which requires 5 columns.
SELECT STATUS, LEVEL, LABEL, ICON, VALUE FROM TABLE;
STATUS - Indicates the initial status of the node (normally the value is 1).
LEVEL  - This is the specific pseudo-column derived from CONNECT BY.
LABEL  - This is the visible label of the node.
ICON   - Contains the icon name of the node (can be NULL).
VALUE  - Contains the value of the node.
Below is the Query used in the Record Group to Populate the Data in the Hierarchical Tree Item.
SELECT 1, LEVEL, ENAME, NULL, TO_CHAR(EMPNO) FROM APPS.EMP CONNECT BY PRIOR EMP.EMPNO = EMP.MGR;
Object Navigator :
Canvas Design :
At Run Time :

Kindly have a look at the below triggers :

1. WHEN-NEW-FORM-INSTANCE (Form Level)
2. WHEN-TREE-NODE-SELECTED (Item Level)
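
Below is a minimal sketch of these two triggers, assuming the tree item is TREE_BLOCK.EMP_TREE, the detail database block is EMP_DETAIL, and the record group is RG_EMP (all three names are placeholders for illustration, not objects from the attached form):

-- WHEN-NEW-FORM-INSTANCE (Form Level)
DECLARE
   htree      ITEM;
   rg_emp     RECORDGROUP;
   v_ignore   NUMBER;
BEGIN
   -- locate the tree item
   htree := FIND_ITEM ('TREE_BLOCK.EMP_TREE');
   -- build the record group from the hierarchical query shown earlier
   rg_emp := CREATE_GROUP_FROM_QUERY ('RG_EMP',
                'SELECT 1, LEVEL, ENAME, NULL, TO_CHAR(EMPNO)'
             || ' FROM APPS.EMP'
             || ' CONNECT BY PRIOR EMPNO = MGR'
             || ' START WITH MGR IS NULL');
   v_ignore := POPULATE_GROUP (rg_emp);
   -- hand the populated record group to the tree item
   FTREE.SET_TREE_PROPERTY (htree, FTREE.RECORD_GROUP, rg_emp);
END;

-- WHEN-TREE-NODE-SELECTED (Item Level)
DECLARE
   v_empno   VARCHAR2 (20);
BEGIN
   -- read the VALUE column of the node the user selected
   v_empno := FTREE.GET_TREE_NODE_PROPERTY ('TREE_BLOCK.EMP_TREE',
                                             :SYSTEM.TRIGGER_NODE,
                                             FTREE.NODE_VALUE);
   -- drive the detail database block from the selected node
   GO_BLOCK ('EMP_DETAIL');
   SET_BLOCK_PROPERTY ('EMP_DETAIL', DEFAULT_WHERE, 'EMPNO = ' || v_empno);
   EXECUTE_QUERY;
END;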

I hope this will be useful for custom developments.

In implementing new systems we come across the problem of importing “alien” data. This may come from a legacy system or an on-going system. The data is transported via extract files from the legacy system to the Oracle system. The gateway to Oracle for this data is SQL*Loader, and the data is loaded into tables via a control file.

Typically, the older systems do not have very normalized data, nor have they been operating with fully implemented database constraints. The lack of constraints over the years in a legacy system lets bad data creep in. Therefore, while bringing external data into the Oracle system, we need a refined set of checks and balances to ensure that we get good data. This requires a lot of programmatic control in the data-loading process.

The approach applied in the case of SQL*Loader is as follows :
1. Load the data into temporary tables via SQL*Loader using a control file, making the data native to Oracle.
2. Write a PL/SQL program to do the processing.
3. Load the data into live tables.
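
For step 2, a minimal sketch of such a processing program might look like the following (temp_emp, emp and load_errors are hypothetical staging, live and error tables used only for illustration):

DECLARE
   CURSOR c_stage IS
      SELECT empno, ename, sal
      FROM   temp_emp;                 -- staging table populated by SQL*Loader
BEGIN
   FOR r IN c_stage LOOP
      -- apply whatever validations the legacy data requires
      IF r.empno IS NULL OR r.sal < 0 THEN
         INSERT INTO load_errors (empno, ename, err_msg)
         VALUES (r.empno, r.ename, 'Invalid employee row');
      ELSE
         INSERT INTO emp (empno, ename, sal)
         VALUES (r.empno, r.ename, r.sal);
      END IF;
   END LOOP;
   COMMIT;
END;
/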

This approach has a lot of dependencies, as well as a marked lack of integration between the steps and of programmatic control. To overcome this, we have analyzed another facility that has been available from Oracle 7.3.x onwards: the UTL_FILE package. With some creative use of this package we can achieve whatever SQL*Loader offers and, in addition, do some high-level validation and complex data loading. In the following discussion a study of the two tools is done.

A BRIEF OVERVIEW OF SQL*Loader:
SQL*Loader is a server utility for loading data from external data files into an Oracle database. Its basic advantages are simple loads and fast loading of data. It can load data in myriad formats, perform elementary filtering, load data into multiple tables, and create one logical record from one or more physical records.

It creates a detailed log file, a bad file that contains rejected records, and a discard file to hold records that are selectively not loaded. The tool is executed from the command line; a username, a password, and the control file name and location are required to run it.
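
As an illustration only, a minimal control file for loading a comma-delimited extract emp.dat into a staging table temp_emp (all file and table names here are hypothetical) could look like this:

-- emp.ctl
LOAD DATA
INFILE 'emp.dat'
BADFILE 'emp.bad'
DISCARDFILE 'emp.dsc'
APPEND
INTO TABLE temp_emp
FIELDS TERMINATED BY ','
(empno, ename, sal)

It would then be invoked from the command line with something like: sqlldr userid=scott/tiger control=emp.ctl log=emp.log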

A BRIEF OVERVIEW OF UTL_FILE:
PL/SQL does not have text file input/output capabilities of its own but acquires them via the UTL_FILE package. It provides rudimentary utilities for reading (as well as writing) files from within a PL/SQL program. The lines in the file are read sequentially, and hence this affects the performance of the program.

The UTL_FILE package can be wrapped with a PL/SQL program, and since this package is integrated with PL/SQL it gives us tremendous ability to flex our “programming muscles.” Some procedures and functions can be added to this wrapper program to serve as a handy “tool” for doing normal file reading operations. With this approach we can achieve whatever SQL*Loader can do, and much more. The security mechanism for UTL_FILE is achieved by defining a parameter in the INIT.ora file called utl_file_dir. The directories that UTL_FILE can read from and write to need to be accessible to the Oracle instance owner and the user running the package.
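
A minimal read-loop sketch, assuming utl_file_dir includes /u01/data and the extract file is emp.dat (both values are hypothetical):

DECLARE
   l_file   UTL_FILE.FILE_TYPE;
   l_line   VARCHAR2 (1000);
BEGIN
   -- the directory must be listed in utl_file_dir (or be a DIRECTORY object in later releases)
   l_file := UTL_FILE.FOPEN ('/u01/data', 'emp.dat', 'R');
   LOOP
      BEGIN
         UTL_FILE.GET_LINE (l_file, l_line);
      EXCEPTION
         WHEN NO_DATA_FOUND THEN
            EXIT;                      -- end of file reached
      END;
      -- parse l_line, validate it and insert into the target tables here
      DBMS_OUTPUT.PUT_LINE (l_line);
   END LOOP;
   UTL_FILE.FCLOSE (l_file);
EXCEPTION
   WHEN UTL_FILE.INVALID_PATH THEN
      RAISE_APPLICATION_ERROR (-20001, 'Directory is not accessible to UTL_FILE');
END;
/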

CONCLUSIONS:
The comparative analysis of SQL*Loader and UTL_FILE reveals that both tools are suitable for our environment, subject to the conditions of our needs.

If the data load is complex (as is the case in relational databases) UTL_FILE seems to be the tool of choice. This tool does require programmatic effort in terms of writing a wrapper package but the subsequent effort in this direction is greatly reduced once the initial tool kit is built for your environment.

UTL_FILE tightly integrates the file input with programmatic control and data manipulation inside a single PL/SQL unit. There are speed disadvantages when loading with UTL_FILE, but these are offset by the programmatic control it offers and the integration it brings.

Thus we find that the UTL_FILE tool bridges the gap left by SQL*Loader for complex data loads.

Here we will discuss how to insert a BLOB file into the database. For this we will create a table and then a procedure that will be used to insert records into the table.

Use the following script to create an employee table:

CREATE TABLE SV_EMP_PHOTO
(
  ID          NUMBER(3) NOT NULL,
  PHOTO_NAME  VARCHAR2(40),
  PHOTO_RAW   BLOB,
  EMP_NAME    VARCHAR2(80)
);

Create a directory where the photos will be stored. I am creating a directory on UNIX as our database is hosted on UNIX.

CREATE DIRECTORY SV_PHOTO_DIR AS '/u002/app/applmgr/empphoto';

Below is the script to create a procedure SV_LOAD_IMAGE that will insert records into the table.

CREATE OR REPLACE PROCEDURE sv_load_image (
   p_id           IN   NUMBER
 , p_emp_name     IN   VARCHAR2
 , p_photo_name   IN   VARCHAR2
)
IS
   l_source   BFILE;
   l_dest     BLOB;
   l_length   BINARY_INTEGER;
BEGIN
   l_source := BFILENAME ('SV_PHOTO_DIR', p_photo_name);

   INSERT INTO sv_emp_photo
               (ID
              , photo_name
              , emp_name
              , photo_raw
               )
   VALUES      (p_id
              , p_photo_name
              , p_emp_name
              , EMPTY_BLOB ()
               )
   RETURNING   photo_raw
   INTO        l_dest;

   -- lock the record
   SELECT     photo_raw
   INTO       l_dest
   FROM       sv_emp_photo
   WHERE      ID = p_id AND photo_name = p_photo_name
   FOR UPDATE;

   -- open the source file
   DBMS_LOB.fileopen (l_source, DBMS_LOB.file_readonly);
   -- get its length
   l_length := DBMS_LOB.getlength (l_source);
   -- read the file and store it in the destination LOB
   DBMS_LOB.loadfromfile (l_dest, l_source, l_length);

   -- update the BLOB column with the destination LOB
   UPDATE sv_emp_photo
   SET photo_raw = l_dest
   WHERE  ID = p_id AND photo_name = p_photo_name;

   -- close the source file
   DBMS_LOB.fileclose (l_source);
END sv_load_image;
/
I have copied a few .jpg images into /u002/app/applmgr/empphoto on UNIX.
Execute the procedure as follows to create records in the database:

exec sv_load_image(1,'Pavki','one.jpg')
exec sv_load_image(2,'Suresh','two.jpg')
exec sv_load_image(3,'Rachna','three.jpg')
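
To verify that the images were stored, a simple check against the table created above is:

SELECT id, emp_name, photo_name, DBMS_LOB.GETLENGTH (photo_raw) AS photo_bytes
FROM   sv_emp_photo;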

From your command prompt, type
sqlplus “/ as sysdba”

Once logged in as SYSDBA, you need to unlock the SCOTT account:
SQL> alter user scott account unlock;
SQL> grant connect, resource to scott;
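
If the account password has also expired, it may need to be reset before you can connect (tiger is only the traditional demo password; pick your own):
SQL> alter user scott identified by tiger;
SQL> connect scott/tiger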