Oracle® TimesTen In-Memory Database SQL Reference 11g Release 2 (11.2.2) E21642-07
This chapter provides information about the SQL statements available in TimesTen.
SQL statements are generally considered to be either Data Manipulation Language (DML) statements or Data Definition Language (DDL) statements.
DML statements modify database objects. INSERT, UPDATE, and DELETE are examples of DML statements.
DDL statements modify the database schema. CREATE TABLE and DROP TABLE are examples of DDL statements.
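For example, the following sketch (using a hypothetical table named t1 in ttIsql) defines a table with DDL and then modifies its contents with DML:
Command> CREATE TABLE t1 (id NUMBER, name VARCHAR2(30));
Command> INSERT INTO t1 VALUES (1, 'sample');
Command> UPDATE t1 SET name = 'updated' WHERE id = 1;
Command> DELETE FROM t1 WHERE id = 1;
Command> DROP TABLE t1;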
A comment can appear between keywords, parameters, or punctuation marks in a statement. You can include a comment in a statement in two ways:
Begin the comment with a slash and an asterisk (/*). Proceed with the text of the comment. The text can span multiple lines. End the comment with an asterisk and a slash (*/). You do not need to separate the opening and terminating characters from the text by a space or line break.
Begin the comment with two hyphens (--). Proceed with the text of the comment. The text cannot extend to a new line. End the comment with a line break.
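For example, both of the following statements (querying a hypothetical table t1) contain valid comments; the first comment spans two lines and the second ends at the line break:
Command> SELECT * FROM t1 /* this comment
> spans two lines */ WHERE id = 1;
Command> SELECT * FROM t1 -- this comment ends at the line break
> WHERE id = 1;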
Statement level optimizer hints are comments in a SQL statement that pass instructions to the TimesTen query optimizer. The optimizer considers these hints when choosing the best execution plan for your query.
A SQL statement can have one comment that includes one or more statement level optimizer hints. SQL statements that support statement level optimizer hints are DELETE, INSERT, MERGE, SELECT, UPDATE, INSERT...SELECT, and CREATE TABLE...AS SELECT. The hint must follow the DELETE, INSERT, MERGE, SELECT, or UPDATE keyword. (A DELETE, INSERT, MERGE, SELECT, or UPDATE keyword is also known as a SQL VERB.) Table 6-1, "Placement of statement level hints in SQL statements" shows the proper placement of hints in a SQL statement.
You embed statement level optimizer hints in comment syntax. TimesTen supports hints in comments that span one line and in comments that span more than one line. If the comment that contains the hint spans one or more lines, use the comment syntax /*+...*/. If the comment that contains the hint spans only one line, you can use the comment syntax --+.
Syntax:
SQL VERB {/*+ [CommentText] hint [{hint|CommentText} [...]] */ |
          --+ [CommentText] hint [{hint|CommentText} [...]] }

hint::= FlagHint | JoinOrderHint | IndexHint

FlagHint::= FlagName (0|1)

JoinOrderHint::= TT_JoinOrder (CorrelationName CorrelationName [...])

IndexHint::= TT_Index (CorrelationName,IndexName,{0|1} [;...])

FlagName::=
TT_BranchAndBound|TT_DynamicLoadEnable|TT_DynamicLoadErrorMode|
TT_FirstRow|TT_ForceCompile|TT_GenPlan|TT_GlobalLocalJoin|
TT_GlobalProcessing|TT_HashGb|TT_HashScan|TT_IndexedOr|TT_MergeJoin|
TT_NestedLoop|TT_NoRemRowIdOpt|TT_Range|TT_Rowid|TT_RowLock|
TT_ShowJoinOrder|TT_TblLock|TT_TblScan|TT_TmpHash|TT_TmpRange|
TT_TmpTable|TT_UseBoyerMooreStringSearch
Parameter | Description |
---|---|
SQL VERB |
SQL VERB refers to one of the keywords: DELETE, INSERT, MERGE, SELECT, or UPDATE. You embed a statement level optimizer hint in comment syntax; if the comment contains a statement level optimizer hint, the comment must immediately follow the SQL VERB. |
/*+ hint */ |
One or more hints that are embedded in comment syntax. The comment syntax can span one or more lines. The plus sign (+) denotes the start of a statement level optimizer hint.
Make sure there is no space between the star (*) and the plus sign (+). |
--+ hint |
One or more hints that are embedded in comment syntax. The comment syntax can only span one line. The plus sign (+) denotes the start of a statement level optimizer hint.
Make sure there is no space between the second hyphen (-) and the plus sign (+). |
hint |
A statement level optimizer hint. A SQL statement supports one or more statement level optimizer hints as one comment string. So for one SQL statement, you can specify one comment that contains one or more hints, and that comment must follow a DELETE, INSERT, MERGE, SELECT, or UPDATE keyword.
If you specify more than one hint within the comment, make sure there is a space between the hints. Statement level optimizer hints are scoped to a SQL statement and have per query semantics. The name and type of statement level optimizer hints map to the transaction level optimizer hints. Transaction level optimizer hints are invoked by calling the built-in procedures ttOptSetFlag, ttOptSetOrder, or ttOptUseIndex. Transaction level hints are scoped to the transaction and have transaction semantics. You must set autocommit to 0, and the transaction level optimizer hint is in effect for the duration of your transaction. For more information, see "ttOptSetFlag," "ttOptSetOrder" or "ttOptUseIndex" in the Oracle TimesTen In-Memory Database Reference. |
CommentText |
Text within a comment string. You can use both statement level optimizer hints and commenting text within one comment. Make sure to include a space between the hint and the commenting text. |
FlagHint |
FlagHint refers to statement level optimizer flags that you enable or disable to influence the execution plan of the TimesTen query optimizer. These flags map to the flags used in the ttOptSetFlag built-in procedure.
Statement level optimizer hint flags are in effect for the statement only whereas transaction level optimizer hint flags are in effect for the duration of your transaction. |
FlagName ( 0|1 ) |
FlagName refers to the name of the statement level optimizer hint flag.
Specify either 0 or 1 for the value of the flag. A value of 1 enables the flag. A value of 0 disables the flag. Enclose the value in parentheses, (). For example, to set the statement level nested loop flag in ttIsql: Command> SELECT /*+ TT_NestedLoop (1) > */ ... To set the transaction level nested loop flag in ttIsql: Command> AUTOCOMMIT OFF; Command> CALL > ttOptSetFlag('NestedLoop',1); |
TT_BranchAndBound |
Flag that maps to the flag BranchAndBound in the ttOptSetFlag built-in procedure.
For a description of this flag, see "ttOptSetFlag" in the Oracle TimesTen In-Memory Database Reference. |
TT_DynamicLoadEnable |
Flag that maps to the flag DynamicLoadEnable in the ttOptSetFlag built-in procedure.
For a description of this flag, see "ttOptSetFlag" in the Oracle TimesTen In-Memory Database Reference. |
TT_DynamicLoadErrorMode |
Flag that maps to the flag DynamicLoadErrorMode in the ttOptSetFlag built-in procedure.
For a description of this flag, see "ttOptSetFlag" in the Oracle TimesTen In-Memory Database Reference. |
TT_FirstRow |
Flag that maps to the flag FirstRow in the ttOptSetFlag built-in procedure.
For a description of this flag, see "ttOptSetFlag" in the Oracle TimesTen In-Memory Database Reference. |
TT_ForceCompile |
Flag that maps to the flag ForceCompile in the ttOptSetFlag built-in procedure.
For a description of this flag, see "ttOptSetFlag" in the Oracle TimesTen In-Memory Database Reference. |
TT_GenPlan |
Flag that maps to the flag GenPlan in the ttOptSetFlag built-in procedure.
For a description of this flag, see "ttOptSetFlag" in the Oracle TimesTen In-Memory Database Reference. |
TT_GlobalLocalJoin |
Flag that maps to the flag GlobalLocalJoin in the ttOptSetFlag built-in procedure.
For a description of this flag, see "ttOptSetFlag" in the Oracle TimesTen In-Memory Database Reference. |
TT_GlobalProcessing |
Flag that maps to the flag GlobalProcessing in the ttOptSetFlag built-in procedure.
For a description of this flag, see "ttOptSetFlag" in the Oracle TimesTen In-Memory Database Reference. |
TT_HashGb |
Flag that maps to the flag HashGb in the ttOptSetFlag built-in procedure.
For a description of this flag, see "ttOptSetFlag" in the Oracle TimesTen In-Memory Database Reference. |
TT_HashScan |
Flag that maps to the flag Hash in the ttOptSetFlag built-in procedure.
For a description of this flag, see "ttOptSetFlag" in the Oracle TimesTen In-Memory Database Reference. |
TT_IndexedOr |
Flag that maps to the flag IndexedOr in the ttOptSetFlag built-in procedure.
For a description of this flag, see "ttOptSetFlag" in the Oracle TimesTen In-Memory Database Reference. |
TT_MergeJoin |
Flag that maps to the flag MergeJoin in the ttOptSetFlag built-in procedure.
For a description of this flag, see "ttOptSetFlag" in the Oracle TimesTen In-Memory Database Reference. |
TT_NestedLoop |
Flag that maps to the flag NestedLoop in the ttOptSetFlag built-in procedure.
For a description of this flag, see "ttOptSetFlag" in the Oracle TimesTen In-Memory Database Reference. |
TT_NoRemRowIdOpt |
Flag that maps to the flag NoRemRowIdOpt in the ttOptSetFlag built-in procedure.
For a description of this flag, see "ttOptSetFlag" in the Oracle TimesTen In-Memory Database Reference. |
TT_Range |
Flag that maps to the flag Range in the ttOptSetFlag built-in procedure.
For a description of this flag, see "ttOptSetFlag" in the Oracle TimesTen In-Memory Database Reference. |
TT_Rowid |
Flag that maps to the flag Rowid in the ttOptSetFlag built-in procedure.
For a description of this flag, see "ttOptSetFlag" in the Oracle TimesTen In-Memory Database Reference. |
TT_RowLock |
Flag that maps to the flag Rowlock in the ttOptSetFlag built-in procedure.
For a description of this flag, see "ttOptSetFlag" in the Oracle TimesTen In-Memory Database Reference. |
TT_ShowJoinOrder |
Flag that maps to the flag ShowJoinOrder in the ttOptSetFlag built-in procedure.
For a description of this flag, see "ttOptSetFlag" in the Oracle TimesTen In-Memory Database Reference. |
TT_TblLock |
Flag that maps to the flag TblLock in the ttOptSetFlag built-in procedure.
For a description of this flag, see "ttOptSetFlag" in the Oracle TimesTen In-Memory Database Reference. |
TT_TblScan |
Flag that maps to the flag Scan in the ttOptSetFlag built-in procedure.
For a description of this flag, see "ttOptSetFlag" in the Oracle TimesTen In-Memory Database Reference. |
TT_TmpHash |
Flag that maps to the flag TmpHash in the ttOptSetFlag built-in procedure.
For a description of this flag, see "ttOptSetFlag" in the Oracle TimesTen In-Memory Database Reference. |
TT_TmpRange |
Flag that maps to the flag TmpRange in the ttOptSetFlag built-in procedure.
For a description of this flag, see "ttOptSetFlag" in the Oracle TimesTen In-Memory Database Reference. |
TT_TmpTable |
Flag that maps to the flag TmpTable in the ttOptSetFlag built-in procedure.
For a description of this flag, see "ttOptSetFlag" in the Oracle TimesTen In-Memory Database Reference. |
TT_UseBoyerMooreStringSearch |
Flag that maps to the flag UseBoyerMooreStringSearch in the ttOptSetFlag built-in procedure.
For a description of this flag, see "ttOptSetFlag" in the Oracle TimesTen In-Memory Database Reference. |
JoinOrderHint ::= TT_JoinOrder ( CorrelationName CorrelationName [...] ) |
JoinOrderHint refers to the syntax for the TT_JoinOrder statement level optimizer hint. The TT_JoinOrder hint instructs the optimizer to join your tables in a specified order. The join order is in effect for the statement only.
Specify the correlation names of the tables in the order in which you want them joined, separated by spaces.
For example, if you are joining tables with the correlation names EMPS and DEPTS and want EMPS joined first: Command> SELECT /*+ TT_JoinOrder > (EMPS DEPTS)*/...
You can execute the built-in procedure ttOptSetOrder to specify a join order at the transaction level; a transaction level join order is in effect for the duration of the transaction.
For more information on ttOptSetOrder, see "ttOptSetOrder" in the Oracle TimesTen In-Memory Database Reference. |
IndexHint ::= TT_Index (CorrelationName,IndexName,{0|1} [;...]) |
IndexHint refers to the syntax for the TT_INDEX statement level optimizer hint. Use the TT_INDEX hint to direct the optimizer to use or not use an index for your table. The index hint is in effect for the statement only.
Specify a value of 0 to ask the optimizer not to consider the index. Specify a value of 1 to ask the optimizer to consider the index. For example, to direct the optimizer to use the index EMP_NAME_IX for the table with correlation name E: Command> SELECT /*+ TT_INDEX > (E,EMP_NAME_IX,1) */ ...
Use a semicolon (;) to include more than one index hint in the comment.
You can execute the built-in procedure ttOptUseIndex to specify index preferences at the transaction level.
For more information on ttOptUseIndex, see "ttOptUseIndex" in the Oracle TimesTen In-Memory Database Reference. |
Embed statement level optimizer hints in comment syntax. Begin the comment with either /* or --. Follow the beginning comment syntax with a plus sign (+). The plus sign (+) signals TimesTen to interpret the comment as a list of hints. The plus sign (+) must immediately follow the comment delimiter (for example, /* or --). No space is permitted between the comment delimiter and the plus sign (+).
In the following example, there is a space between the star (*) and the plus sign (+), so the hint is ignored:
Command> SELECT /* + TT_TblScan (1) This hint is ignored because there is a > space between the star (*) and the plus (+) sign. */ ...
hint is one of the statement level optimizer hints supported by TimesTen. There can be a space between the plus sign (+) and the hint. If the comment contains multiple hints, separate the hints by at least one space. For example, to specify two hints on one line:
Command> SELECT --+ TT_MergeJoin (0) TT_NestedLoop (1) > ...
You can intersperse commenting text with hints in a comment. For example,
Command> SELECT /*+ TT_HashScan (1) This demonstrates a hint followed by a > comment string. */ ...
TimesTen ignores hints and does not return an error if:
Your hint does not follow the DELETE, INSERT, MERGE, SELECT, or UPDATE keyword.
Your hint contains misspellings or syntax errors. If you have hints that are within the same comment and some hints are correct syntactically and some hints are incorrect syntactically, TimesTen ignores the incorrect hints and accepts the correct hints.
If you use either the TT_JoinOrder or TT_Index hint and you do not supply a closing parenthesis, the remainder of the hint string is ignored.
For hints that conflict with each other, TimesTen uses the rightmost hint in the comment. For example, if the comment string is /*+TT_TblScan (0)...TT_TblScan (1) */, the rightmost hint, TT_TblScan (1), is used.
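For instance, in the following sketch (querying a hypothetical table t1), the optimizer applies TT_TblScan (1) because it is the rightmost of the two conflicting hints:
Command> SELECT /*+ TT_TblScan (0) TT_TblScan (1) */ * FROM t1;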
Statement level optimizer hints override conflicting transaction level optimizer hints. So if you specify a transaction level optimizer hint that conflicts with a statement level optimizer hint, the statement level optimizer hint overrides the conflicting transaction level optimizer hint. For example, if you call ttOptSetFlag and enable the Range flag, and then you issue a SQL query that disables the statement level optimizer flag TT_Range, TimesTen disables the range flag for the query. After the query is executed, the original range flag setting that was in place in the transaction before the query was executed remains in effect for the duration of the transaction. For more information, see Example 6-1, "Using statement level optimizer hints for a SELECT query".
Do not use statement level optimizer hints in a subquery.
The TimesTen query optimizer does not recognize statement level optimizer hints for passthrough statements. TimesTen passes the SQL text for passthrough statements to the Oracle Database and the SQL text is processed according to the SQL rules of the Oracle Database.
You can specify statement level optimizer hints in DELETE, INSERT, MERGE, SELECT, and UPDATE statements. You can also specify statement level optimizer hints in the CREATE TABLE...AS SELECT statement and in the INSERT...SELECT statement. You must specify the hint within comment syntax and the comment syntax must immediately follow the SQL VERB (for example, DELETE /*+ hint */...). Table 6-1 shows the correct placement of the statement level hint.
Table 6-1 Placement of statement level hints in SQL statements
SQL statement | Placement of hint |
---|---|
CREATE TABLE...AS SELECT | CREATE TABLE ... AS SELECT /*+ hint */ ... Do not use transaction level hints with the CREATE TABLE...AS SELECT statement. |
DELETE | DELETE /*+ hint */ ... |
INSERT | INSERT /*+ hint */ ... |
INSERT...SELECT | INSERT /*+ hint */ INTO ... SELECT ... |
MERGE | MERGE /*+ hint */ ... |
SELECT | SELECT /*+ hint */ ... Note: Do not specify a hint in a subquery. |
UPDATE | UPDATE /*+ hint */ ... |
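For example, the following sketches (using hypothetical tables t1 and t2) place the hint immediately after the SQL VERB in an INSERT...SELECT statement and after the SELECT keyword in a CREATE TABLE...AS SELECT statement:
Command> INSERT /*+ TT_TblLock (1) */ INTO t1 SELECT * FROM t2;
Command> CREATE TABLE t3 AS SELECT /*+ TT_TblLock (1) */ * FROM t2;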
Use optimizer hints to influence the TimesTen query optimizer in determining the choice of the execution plan for your query.
To view transaction level optimizer hint settings, execute the built-in procedure ttOptGetFlag. For more information on the built-in procedure ttOptGetFlag, see "ttOptGetFlag" in the Oracle TimesTen In-Memory Database Reference.
Table 6-2, "Differences between statement level and transaction level optimizer hints" summarizes the differences between statement level optimizer hints and transaction level optimizer hints. Before using hints, confirm whether you need to update statistics, because the TimesTen optimizer is a cost based optimizer and uses statistics and indexes to generate a query plan. For more information on the query optimizer, see "The TimesTen Query Optimizer" in the Oracle TimesTen In-Memory Database Operations Guide.
Table 6-2 Differences between statement level and transaction level optimizer hints
Statement level optimizer hints | Transaction level optimizer hints |
---|---|
Specify within comment syntax and after a DELETE, INSERT, MERGE, SELECT, or UPDATE keyword (SQL VERB). |
Specify by invoking the built-in procedures ttOptSetFlag, ttOptSetOrder, or ttOptUseIndex. |
Hint is scoped to SQL statement. This means you want to influence the optimizer at the statement level. |
Hint is scoped to the transaction. This means you want to influence the optimizer at the transaction level. |
Autocommit setting has no effect. This is because the hint is scoped to the SQL statement. After the statement is executed, the hint has no effect on future statements or queries. |
Autocommit has an effect. Set autocommit to off. If you do not set autocommit to off, then the hint has no effect because the statement is executed in its own transaction. When you set autocommit to off, the hint is in effect for the duration of the transaction (until you issue a commit or rollback). |
Optimizer uses the hint for the statement only. |
Optimizer uses the hint for all statements in the transaction. |
Hint is supported in INSERT...SELECT and CREATE TABLE...AS SELECT statements. |
Hint is not supported in INSERT...SELECT and CREATE TABLE...AS SELECT statements. |
If you specify the hint in a transaction in which transaction level optimizer hints are specified, the statement level optimizer hint overrides the transaction level hint for the SQL statement. After TimesTen executes the SQL statement, the original transaction level optimizer hint remains in effect for the duration of the transaction. |
Hints are in effect for the duration of the transaction. If you specify a statement level optimizer hint in a SQL statement, the statement level optimizer hint is in effect for that statement and the optimizer does not use the transaction level hint for the statement. After TimesTen executes the statement, the original transaction level optimizer hint remains in effect for the remainder of the transaction. |
Use statement level optimizer hints if you want to influence the optimizer for a specific statement. This means you have to specify a hint for each statement that you want to influence the optimizer. This could result in multiple alterations to your statements. |
Use transaction level optimizer hints to influence the optimizer for all statements in a transaction. This means you do not have to specify a hint for each statement. The hint is in effect for the duration of the transaction so the hint applies to all statements in the transaction. |
The following examples illustrate usages of statement level and transaction level optimizer hints. The TimesTen optimizer is a cost based query optimizer and generates what it thinks is the most optimal execution plan for your statement. The plan is based on the indexes that exist on the referenced tables as well as the column and table statistics that are available. When you recompute statistics or change indexes, the TimesTen optimizer may change the execution plan based on the recomputed statistics and index changes. Because the execution plan may vary, these examples are included for demonstration purposes only. Examples include:
Example 6-1, "Using statement level optimizer hints for a SELECT query"
Example 6-4, "Using the statement level optimizer hint TT_INDEX"
Example 6-1 Using statement level optimizer hints for a SELECT query
View the execution plan for a query. Then use statement level optimizer hints to influence the optimizer to choose a different execution plan. Consider the query:
Command> SELECT r.region_name, c.country_name > FROM regions r, countries c > WHERE r.region_id = c.region_id > ORDER BY c.region_id;
Use the ttIsql EXPLAIN command to view the plan generated by the optimizer. Note:
The optimizer performs two range scans using table level locking for both scans.
The optimizer uses the MergeJoin operation to join the two tables.
Command> EXPLAIN SELECT r.region_name, c.country_name FROM regions r, countries c WHERE r.region_id = c.region_id ORDER BY c.region_id; Query Optimizer Plan: STEP: 1 LEVEL: 2 OPERATION: TblLkRangeScan TBLNAME: COUNTRIES IXNAME: COUNTR_REG_FK INDEXED CONDITION: <NULL> NOT INDEXED: <NULL> STEP: 2 LEVEL: 2 OPERATION: TblLkRangeScan TBLNAME: REGIONS IXNAME: REGIONS INDEXED CONDITION: R.REGION_ID >= C.REGION_ID NOT INDEXED: <NULL> STEP: 3 LEVEL: 1 OPERATION: MergeJoin TBLNAME: <NULL> IXNAME: <NULL> INDEXED CONDITION: C.REGION_ID = R.REGION_ID NOT INDEXED: <NULL>
Now use statement level optimizer hints to direct the optimizer to perform the scans using row level locking and to use a NestedLoop operation to join the tables. Set autocommit to on to illustrate that the autocommit setting has no effect because statement level optimizer hints are scoped to the SQL statement.
Command> autocommit on; Command> EXPLAIN SELECT /*+ TT_RowLock (1), TT_TblLock (0), TT_MergeJoin (0), > TT_NestedLoop (1) */ > r.region_name, c.country_name > FROM regions r, countries c > WHERE r.region_id = c.region_id > ORDER BY c.region_id; Query Optimizer Plan: STEP: 1 LEVEL: 3 OPERATION: RowLkRangeScan TBLNAME: REGIONS IXNAME: REGIONS INDEXED CONDITION: <NULL> NOT INDEXED: <NULL> STEP: 2 LEVEL: 3 OPERATION: RowLkRangeScan TBLNAME: COUNTRIES IXNAME: COUNTR_REG_FK INDEXED CONDITION: C.REGION_ID = R.REGION_ID NOT INDEXED: <NULL> STEP: 3 LEVEL: 2 OPERATION: NestedLoop TBLNAME: <NULL> IXNAME: <NULL> INDEXED CONDITION: <NULL> NOT INDEXED: <NULL> STEP: 4 LEVEL: 1 OPERATION: OrderBy TBLNAME: <NULL> IXNAME: <NULL> INDEXED CONDITION: <NULL> NOT INDEXED: <NULL>
Prepare the query again without statement level optimizer hints. The optimizer reverts back to the original execution plan because statement level optimizer hints are scoped to the SQL statement.
Command> EXPLAIN SELECT r.region_name, c.country_name FROM regions r, countries c WHERE r.region_id = c.region_id ORDER BY c.region_id; Query Optimizer Plan: STEP: 1 LEVEL: 2 OPERATION: TblLkRangeScan TBLNAME: COUNTRIES IXNAME: COUNTR_REG_FK INDEXED CONDITION: <NULL> NOT INDEXED: <NULL> STEP: 2 LEVEL: 2 OPERATION: TblLkRangeScan TBLNAME: REGIONS IXNAME: REGIONS INDEXED CONDITION: R.REGION_ID >= C.REGION_ID NOT INDEXED: <NULL> STEP: 3 LEVEL: 1 OPERATION: MergeJoin TBLNAME: <NULL> IXNAME: <NULL> INDEXED CONDITION: C.REGION_ID = R.REGION_ID NOT INDEXED: <NULL>
Example 6-2 Using on and off hinting
This example illustrates the importance of directing the optimizer to specifically enable or disable hints that perform a similar function. For example, the hash and range hints direct the optimizer to use either a hash or range access path for the table. In order to ensure the optimizer chooses the specific access path, enable one hint and disable all other related hints.
Create a table and create a hash index on the first column of the table and a range index on the second column.
Command> CREATE TABLE test (col1 NUMBER, col2 NUMBER); Command> CREATE HASH INDEX h_index ON test (col1); Command> CREATE INDEX hr_index ON test (col2);
Set autocommit to off and execute the built-in procedure ttOptGetFlag to review the current transaction level optimizer hint settings for the transaction. A setting of 1 means the flag is enabled.
Command> autocommit off; Command> CALL ttOptGetFlag ('Hash'); < Hash, 1 > 1 row found. Command> CALL ttOptGetFlag ('Scan'); < Scan, 1 > 1 row found.
Use the ttIsql EXPLAIN command to review the plan for a SELECT query using a WHERE clause and dynamic parameters. The optimizer uses a hash scan.
Command> EXPLAIN SELECT * FROM test WHERE col1 = ? and col2 = ?; Query Optimizer Plan: STEP: 1 LEVEL: 1 OPERATION: RowLkHashScan TBLNAME: TEST IXNAME: H_INDEX INDEXED CONDITION: TEST.COL1 = _QMARK_1 NOT INDEXED: TEST.COL2 = _QMARK_2
Use the statement level optimizer hint TT_Range to direct the optimizer to use a range scan. Note that the optimizer ignores the TT_Range hint and uses a hash scan because you did not direct the optimizer to disable the hash scan. Alter the statement and direct the optimizer to use a range scan and not use a hash scan. To accomplish this, enable the statement level optimizer hint TT_Range and disable the statement level optimizer hint TT_HashScan. The optimizer no longer ignores the TT_Range hint.
Command> EXPLAIN SELECT --+ TT_Range (1) Single line comment to set TT_Range > * FROM TEST WHERE col1 = ? and col2 = ?; Query Optimizer Plan: STEP: 1 LEVEL: 1 OPERATION: RowLkHashScan TBLNAME: TEST IXNAME: H_INDEX INDEXED CONDITION: TEST.COL1 = _QMARK_1 NOT INDEXED: TEST.COL2 = _QMARK_2 Command> EXPLAIN SELECT /*+ TT_Range (1) TT_HashScan (0) > Multiple line comment to enable TT_Range and disable TT_HashScan */ > * FROM TEST WHERE col1 = ? and col2 = ?; Query Optimizer Plan: STEP: 1 LEVEL: 1 OPERATION: RowLkRangeScan TBLNAME: TEST IXNAME: HR_INDEX INDEXED CONDITION: TEST.COL2 = _QMARK_2 NOT INDEXED: TEST.COL1 = _QMARK_1
Prepare the query again without using statement level optimizer hints and without issuing a commit or rollback. The optimizer uses the transaction level optimizer hints settings that were in effect before executing the query. The optimizer uses transaction level optimizer hints because statement level optimizer hints are scoped to the SQL statement.
Command> EXPLAIN SELECT * FROM TEST WHERE col1 = ? and col2 = ?; Query Optimizer Plan: STEP: 1 LEVEL: 1 OPERATION: RowLkHashScan TBLNAME: TEST IXNAME: H_INDEX INDEXED CONDITION: TEST.COL1 = _QMARK_1 NOT INDEXED: TEST.COL2 = _QMARK_2
Example 6-3 Using TT_JoinOrder to specify a join order
Use the statement level optimizer hint TT_JoinOrder to direct the optimizer to use a specific join order. First use a transaction level optimizer hint to direct the optimizer to use a specific join order for the transaction. Then use a statement level optimizer hint to direct the optimizer to change the join order for the statement only.
Command> CALL ttOptSetOrder ('e d j'); Command> EXPLAIN SELECT * FROM employees e, departments d, job_history j WHERE e.department_id = d.department_id AND e.hire_date = j.start_date; Query Optimizer Plan: STEP: 1 LEVEL: 3 OPERATION: TblLkRangeScan TBLNAME: EMPLOYEES IXNAME: EMP_DEPT_FK INDEXED CONDITION: <NULL> NOT INDEXED: <NULL> STEP: 2 LEVEL: 3 OPERATION: TblLkRangeScan TBLNAME: DEPARTMENTS IXNAME: DEPARTMENTS INDEXED CONDITION: D.DEPARTMENT_ID >= E.DEPARTMENT_ID NOT INDEXED: <NULL> STEP: 3 LEVEL: 2 OPERATION: MergeJoin TBLNAME: <NULL> IXNAME: <NULL> INDEXED CONDITION: E.DEPARTMENT_ID = D.DEPARTMENT_ID NOT INDEXED: <NULL> STEP: 4 LEVEL: 2 OPERATION: TblLkRangeScan TBLNAME: JOB_HISTORY IXNAME: JOB_HISTORY INDEXED CONDITION: <NULL> NOT INDEXED: E.HIRE_DATE = J.START_DATE STEP: 5 LEVEL: 1 OPERATION: NestedLoop TBLNAME: <NULL> IXNAME: <NULL> INDEXED CONDITION: <NULL> NOT INDEXED: <NULL>
Use the statement level optimizer hint, TT_JoinOrder, to direct the optimizer to override the transaction level join order optimizer hint for the SQL statement only.
Command> EXPLAIN SELECT --+ TT_JoinOrder (e j d) > * > FROM employees e, departments d, job_history j > WHERE e.department_id = d.department_id > AND e.hire_date = j.start_date; Query Optimizer Plan: STEP: 1 LEVEL: 3 OPERATION: TblLkRangeScan TBLNAME: EMPLOYEES IXNAME: EMP_DEPT_FK INDEXED CONDITION: <NULL> NOT INDEXED: <NULL> STEP: 2 LEVEL: 3 OPERATION: TblLkRangeScan TBLNAME: JOB_HISTORY IXNAME: JOB_HISTORY INDEXED CONDITION: <NULL> NOT INDEXED: E.HIRE_DATE = J.START_DATE STEP: 3 LEVEL: 2 OPERATION: NestedLoop TBLNAME: <NULL> IXNAME: <NULL> INDEXED CONDITION: <NULL> NOT INDEXED: <NULL> STEP: 4 LEVEL: 2 OPERATION: TblLkRangeScan TBLNAME: DEPARTMENTS IXNAME: DEPARTMENTS INDEXED CONDITION: D.DEPARTMENT_ID >= E.DEPARTMENT_ID NOT INDEXED: <NULL> STEP: 5 LEVEL: 1 OPERATION: MergeJoin TBLNAME: <NULL> IXNAME: <NULL> INDEXED CONDITION: E.DEPARTMENT_ID = D.DEPARTMENT_ID NOT INDEXED: <NULL>
Prepare the query again to verify that the join order that was in effect for the transaction remains in effect.
Command> EXPLAIN SELECT * > FROM employees e, departments d, job_history j > WHERE e.department_id = d.department_id > AND e.hire_date = j.start_date; Query Optimizer Plan: STEP: 1 LEVEL: 3 OPERATION: TblLkRangeScan TBLNAME: EMPLOYEES IXNAME: EMP_DEPT_FK INDEXED CONDITION: <NULL> NOT INDEXED: <NULL> STEP: 2 LEVEL: 3 OPERATION: TblLkRangeScan TBLNAME: DEPARTMENTS IXNAME: DEPARTMENTS INDEXED CONDITION: D.DEPARTMENT_ID >= E.DEPARTMENT_ID NOT INDEXED: <NULL> STEP: 3 LEVEL: 2 OPERATION: MergeJoin TBLNAME: <NULL> IXNAME: <NULL> INDEXED CONDITION: E.DEPARTMENT_ID = D.DEPARTMENT_ID NOT INDEXED: <NULL> STEP: 4 LEVEL: 2 OPERATION: TblLkRangeScan TBLNAME: JOB_HISTORY IXNAME: JOB_HISTORY INDEXED CONDITION: <NULL> NOT INDEXED: E.HIRE_DATE = J.START_DATE STEP: 5 LEVEL: 1 OPERATION: NestedLoop TBLNAME: <NULL> IXNAME: <NULL> INDEXED CONDITION: <NULL> NOT INDEXED: <NULL>
Example 6-4 Using the statement level optimizer hint TT_INDEX
Perform a query on the employees table that uses the index emp_name_ix. Then use the statement level optimizer hint TT_INDEX to direct the optimizer not to use this index. First run the ttIsql command indexes to view the indexes for the employees table.
Command> indexes employees; Indexes on table TESTUSER.EMPLOYEES: EMPLOYEES: unique range index on columns: EMPLOYEE_ID (referenced by foreign key index JHIST_EMP_FK on table TESTUSER.JOB_HISTORY) TTUNIQUE_0: unique range index on columns: EMAIL EMP_DEPT_FK: non-unique range index on columns: DEPARTMENT_ID (foreign key index references table TESTUSER.DEPARTMENTS(DEPARTMENT_ID)) EMP_JOB_FK: non-unique range index on columns: JOB_ID (foreign key index references table TESTUSER.JOBS(JOB_ID)) EMP_NAME_IX: non-unique range index on columns: LAST_NAME FIRST_NAME 5 indexes found. 5 indexes found on 1 table.
Use the ttIsql command EXPLAIN to view the execution plan for a SELECT query on the employees table that uses a WHERE clause on the last_name column.
Command> EXPLAIN SELECT e.first_name > FROM employees e > WHERE e.last_name BETWEEN 'A' AND 'B'; Query Optimizer Plan: STEP: 1 LEVEL: 1 OPERATION: RowLkRangeScan TBLNAME: EMPLOYEES IXNAME: EMP_NAME_IX INDEXED CONDITION: E.LAST_NAME >= 'A' AND E.LAST_NAME <= 'B' NOT INDEXED: <NULL>
Use the statement level optimizer hint, TT_INDEX, to direct the optimizer not to use the index emp_name_ix.
Command> EXPLAIN SELECT --+ TT_INDEX (E,EMP_NAME_IX,0) > e.first_name > FROM employees e > WHERE e.last_name BETWEEN 'A' AND 'B'; Query Optimizer Plan: STEP: 1 LEVEL: 1 OPERATION: TblLkRangeScan TBLNAME: EMPLOYEES IXNAME: EMPLOYEES INDEXED CONDITION: <NULL> NOT INDEXED: E.LAST_NAME <= 'B' AND E.LAST_NAME >= 'A'
You can change an active standby pair by:
Adding or dropping a subscriber database
Altering store attributes. Only the PORT and TIMEOUT attributes can be set for subscribers.
Including tables, sequences or cache groups in the replication scheme
Excluding tables, sequences or cache groups from the replication scheme
See "Making other changes to an active standby pair" in Oracle TimesTen In-Memory Database Replication Guide.
The required privilege is ADMIN.
ALTER ACTIVE STANDBY PAIR { SubscriberOperation | StoreOperation | InclusionOperation | NetworkOperation } [...]
Syntax for SubscriberOperation:
{ADD | DROP } SUBSCRIBER FullStoreName
Syntax for StoreOperation:
ALTER STORE FullStoreName SET StoreAttribute
Syntax for InclusionOperation:
[{ INCLUDE | EXCLUDE }
  { TABLE [[Owner.]TableName [,...]] |
    CACHE GROUP [[Owner.]CacheGroupName [,...]] |
    SEQUENCE [[Owner.]SequenceName [,...]] } [,...]]
Syntax for NetworkOperation:
ADD ROUTE MASTER FullStoreName SUBSCRIBER FullStoreName
  { { MASTERIP MasterHost | SUBSCRIBERIP SubscriberHost } PRIORITY Priority } [...]

DROP ROUTE MASTER FullStoreName SUBSCRIBER FullStoreName
  { MASTERIP MasterHost | SUBSCRIBERIP SubscriberHost } [...]
Parameter | Description |
---|---|
ADD SUBSCRIBER FullStoreName |
Indicates a subscriber database. FullStoreName is the database file name specified in the DataStore attribute of the DSN description. |
DROP SUBSCRIBER FullStoreName |
Indicates that updates should no longer be sent to the specified subscriber database. This operation fails if the replication scheme has only one subscriber. FullStoreName is the database file name specified in the DataStore attribute of the DSN description. |
ALTER STORE FullStoreName SET StoreAttribute |
Indicates changes to the attributes of a database. Only the PORT and TIMEOUT attributes can be set for subscribers. FullStoreName is the database file name specified in the DataStore attribute of the DSN description.
For information on the StoreAttribute clauses, see "CREATE ACTIVE STANDBY PAIR". |
FullStoreName |
The database, specified as the database file base name. This is the database file name specified in the DataStore attribute of the DSN description, without the path or the file name extension. |
{INCLUDE|EXCLUDE}
|
Includes in or excludes from replication the tables, sequences or cache groups listed.
|
ADD ROUTE MASTER FullStoreName SUBSCRIBER FullStoreName |
Adds NetworkOperation to replication scheme. Enables you to control the network interface that a master store uses for every outbound connection to each of its subscriber stores. In the context of the ADD ROUTE clause, each master database is a subscriber of the other master database and each read-only subscriber is a subscriber of both master databases.
Can be specified more than once. For |
DROP ROUTE MASTER FullStoreName SUBSCRIBER FullStoreName |
Drops NetworkOperation from replication scheme.
Can be specified more than once. For |
MASTERIP MasterHost | SUBSCRIBERIP SubscriberHost |
MasterHost and SubscriberHost are the IP addresses for the network interface on the master and subscriber stores. Specify in dot notation or canonical format or in colon notation for IPV6.
Clause can be specified more than once. Valid for both ADD ROUTE and DROP ROUTE. |
PRIORITY Priority |
Variable expressed as an integer from 1 to 99. Denotes the priority of the IP address. Lower integral values have higher priority. An error is returned if multiple addresses with the same priority are specified. Controls the order in which multiple IP addresses are used to establish peer connections.
Required in the ADD ROUTE clause of NetworkOperation. |
You must stop the replication agent before altering an active standby pair. The exceptions are for those objects and statements that are automatically replicated and included based on the values of the DDL_REPLICATION_LEVEL
and DDL_REPLICATION_ACTION
attributes, as described in "ALTER SESSION".
You may only alter the active standby pair replication scheme on the active database. See "Making other changes to an active standby pair" in Oracle TimesTen In-Memory Database Replication Guide for more information.
You may not use ALTER ACTIVE STANDBY PAIR when using Oracle Clusterware with TimesTen. See "Restricted commands and SQL statements" in Oracle TimesTen In-Memory Database Replication Guide for more information.
Instead, perform the tasks described in "Changing the schema" section of the Oracle TimesTen In-Memory Database Replication Guide.
Use ADD SUBSCRIBER FullStoreName to add a subscriber to the replication scheme.
Use DROP SUBSCRIBER FullStoreName to drop a subscriber from the replication scheme.
Use the INCLUDE or EXCLUDE clause to include the listed tables, sequences or cache groups in the replication scheme or to exclude them from the replication scheme. Use one INCLUDE or EXCLUDE clause for each object type (table, sequence or cache group). The ALTER ACTIVE STANDBY PAIR statement is not necessary for those objects and statements that are automatically replicated and included based on the values of the DDL_REPLICATION_LEVEL and DDL_REPLICATION_ACTION attributes, as described in "ALTER SESSION". However, if DDL_REPLICATION_LEVEL=2 and DDL_REPLICATION_ACTION='EXCLUDE', use the INCLUDE clause to include replicated objects into the replication scheme.
When DDL_REPLICATION_LEVEL=2, the INCLUDE clause can only be used with empty tables on the active database. The contents of the corresponding tables on the standby and any subscribers will be truncated before the table is added to the replication scheme.
Add a subscriber to the replication scheme.
ALTER ACTIVE STANDBY PAIR ADD SUBSCRIBER rep4;
Drop two subscribers from the replication scheme.
ALTER ACTIVE STANDBY PAIR DROP SUBSCRIBER rep3 DROP SUBSCRIBER rep4;
Alter the store attributes of the rep3 and rep4 databases.
ALTER ACTIVE STANDBY PAIR ALTER STORE rep3 SET PORT 23000 TIMEOUT 180 ALTER STORE rep4 SET PORT 23500 TIMEOUT 180;
Add a table, a sequence and two cache groups to the replication scheme.
ALTER ACTIVE STANDBY PAIR INCLUDE TABLE my.newtab INCLUDE SEQUENCE my.newseq INCLUDE CACHE GROUP my.newcg1, my.newcg2;
Add a NetworkOperation clause to the active standby pair:
ALTER ACTIVE STANDBY PAIR ADD ROUTE MASTER rep1 ON "machine1" SUBSCRIBER rep2 ON "machine2" MASTERIP "1.1.1.1" PRIORITY 1 SUBSCRIBERIP "2.2.2.2" PRIORITY 1;
CREATE ACTIVE STANDBY PAIR
DROP ACTIVE STANDBY PAIR
The ALTER CACHE GROUP statement enables changes to the state, interval and mode of AUTOREFRESH.
Updates on the Oracle Database tables can be propagated back to the TimesTen cache group with the use of AUTOREFRESH. AUTOREFRESH can be enabled when the cache group is a user managed cache group or is defined as READONLY with an AUTOREFRESH clause.
Any values or states set by ALTER CACHE GROUP are persistent. They are stored in the database and survive daemon and cache agent restarts.
For a description of cache group types, see "User managed and system managed cache groups".
No privilege is required for the cache group owner.
ALTER ANY CACHE GROUP is required for another user's cache group.
This statement changes the AUTOREFRESH mode of the cache group, which determines which rows are updated during an autorefresh operation:
ALTER CACHE GROUP [Owner.]GroupName SET AUTOREFRESH MODE {INCREMENTAL | FULL}
This statement changes the AUTOREFRESH interval on the cache group:
ALTER CACHE GROUP [Owner.]GroupName SET AUTOREFRESH INTERVAL IntervalValue {MINUTE[S] | SECOND[S] | MILLISECOND[S] }
This statement alters the AUTOREFRESH state:
ALTER CACHE GROUP [Owner.]GroupName SET AUTOREFRESH STATE {ON | OFF | PAUSED}
Parameter | Description |
---|---|
[ Owner .] GroupName |
Name of the cache group to be altered. |
AUTOREFRESH |
Indicates that changes to the Oracle Database tables should be automatically propagated to TimesTen. For details, see "AUTOREFRESH in cache groups". |
MODE |
Determines which rows in the cache are updated during an autorefresh. If the INCREMENTAL clause is specified, TimesTen refreshes only rows that have been changed on the Oracle Database since the last propagation. If the FULL clause is specified or if there is neither FULL nor INCREMENTAL clause specified, TimesTen updates all rows in the cache with each autorefresh. The default mode is INCREMENTAL . |
INTERVAL |
Indicates the interval at which autorefresh should occur in units of minutes, seconds or milliseconds. An integer value that specifies how often AUTOREFRESH should be scheduled, in minutes, seconds or milliseconds. The default value is 10 minutes. If the specified interval is not long enough for an AUTOREFRESH to complete, a runtime warning is generated and the next AUTOREFRESH waits until the current one finishes. An informational message is generated in the support log if the wait queue reaches 10. |
STATE |
Specifies whether AUTOREFRESH should be changed to on, off or paused. By default, the AUTOREFRESH STATE is ON . |
ON |
AUTOREFRESH is scheduled to occur at the specified interval. |
OFF |
A scheduled AUTOREFRESH is canceled, and TimesTen does not try to maintain the information necessary for an INCREMENTAL refresh. Therefore if AUTOREFRESH is turned on again at a later time, the first refresh is FULL . |
PAUSED |
A scheduled AUTOREFRESH is canceled, but TimesTen tries to maintain the information necessary for an INCREMENTAL refresh. Therefore if AUTOREFRESH is turned on again at a later time, a full refresh may not be necessary. |
A refresh does not occur immediately after issuing ALTER CACHE GROUP...SET AUTOREFRESH STATE
. This statement only changes the state of AUTOREFRESH
. When the transaction that contains the ALTER CACHE GROUP
statement is committed, the cache agent is notified to schedule an AUTOREFRESH
immediately, but the commit goes through without waiting for the completion of the refresh. The scheduling of the autorefresh operation is part of the transaction, but the refresh itself is not.
If you issue an ALTER CACHE GROUP... SET AUTOREFRESH STATE OFF
statement and there is an autorefresh operation currently running, then:
If the LockWait interval is 0, the ALTER statement fails with a lock timeout error.
If the LockWait interval is nonzero, then the current autorefresh transaction is rolled back and the ALTER statement continues. This affects all cache groups with the same autorefresh interval.
Replication cannot occur between cache groups with AUTOREFRESH and cache groups without AUTOREFRESH.
If the ALTER CACHE GROUP statement is part of a transaction that is being replicated, and if the replication scheme has the RETURN TWOSAFE attribute, the transaction may fail.
You cannot execute the ALTER CACHE GROUP statement under the serializable isolation level. An error message is returned if you attempt to do so.
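For example, the following sketch (assuming a READONLY cache group named ttuser.customer_cg that was defined with an AUTOREFRESH clause) changes the autorefresh interval and then pauses autorefresh:
Command> ALTER CACHE GROUP ttuser.customer_cg SET AUTOREFRESH INTERVAL 30 SECONDS;
Command> ALTER CACHE GROUP ttuser.customer_cg SET AUTOREFRESH STATE PAUSED;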
CREATE CACHE GROUP
The ALTER FUNCTION statement recompiles a standalone stored function. Explicit recompilation eliminates the need for implicit runtime recompilation and prevents associated runtime compilation errors and performance overhead.
To recompile a function that is part of a package, recompile the package using the ALTER PACKAGE statement.
No privilege is required for the PL/SQL function owner.
ALTER ANY PROCEDURE is required for another user's function.
ALTER FUNCTION [Owner.]FunctionName COMPILE [CompilerParametersClause [...]] [REUSE SETTINGS]
Parameter | Description |
---|---|
[ Owner .] FunctionName |
Name of the function to be recompiled. |
COMPILE |
Required keyword that causes recompilation of the function. If the function does not compile successfully, use the ttIsql command SHOW ERRORS to display the compiler error messages. |
CompilerParametersClause |
Use this optional clause to specify a value for one of the PL/SQL persistent compiler parameters. The PL/SQL persistent compiler parameters are PLSQL_OPTIMIZE_LEVEL , PLSCOPE_SETTINGS and NLS_LENGTH_SEMANTICS .
You can specify each parameter once in the statement. If you omit a parameter from this clause and you specify REUSE SETTINGS, TimesTen uses the existing setting for that parameter. |
REUSE SETTINGS |
Use this optional clause to prevent TimesTen from dropping and reacquiring compiler switch settings. When you specify REUSE SETTINGS , TimesTen preserves the existing settings and uses them for the compilation of any parameters for which values are not specified. |
The ALTER FUNCTION statement does not change the declaration or definition of an existing function. To redeclare or redefine a function, use the CREATE FUNCTION statement.
TimesTen first recompiles objects upon which the function depends, if any of those objects are invalid.
TimesTen also invalidates any objects that depend on the function, such as functions that call the recompiled function or package bodies that define functions that call the recompiled function.
If TimesTen recompiles the function successfully, then the function becomes valid. If recompiling the function results in compilation errors, then TimesTen returns an error and the function remains invalid. Use the ttIsql command SHOW ERRORS to display compilation errors.
During recompilation, TimesTen drops all persistent compiler settings, retrieves them again from the session, and stores them at the end of compilation. To avoid this process, specify the REUSE SETTINGS clause.
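For example, the following sketch recompiles a hypothetical standalone function named get_sal, sets PLSQL_OPTIMIZE_LEVEL for the recompilation, and preserves the remaining compiler settings:
Command> ALTER FUNCTION get_sal COMPILE PLSQL_OPTIMIZE_LEVEL = 3 REUSE SETTINGS;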
CREATE FUNCTION
The ALTER PACKAGE statement explicitly recompiles a package specification, package body, or both. Explicit recompilation eliminates the need for implicit runtime recompilation and prevents associated runtime compilation errors.
This statement recompiles all package objects together. You cannot use the ALTER PROCEDURE or ALTER FUNCTION statement to individually recompile a procedure or function that is part of a package.
No privilege is required for the package owner.
ALTER ANY PROCEDURE is required for another user's package.
ALTER PACKAGE [Owner.]PackageName COMPILE [PACKAGE|SPECIFICATION|BODY] [CompilerParametersClause [...]] [REUSE SETTINGS]
Parameter | Description |
---|---|
[ Owner .] PackageName |
Name of the package to be recompiled. |
COMPILE |
Required clause used to force the recompilation of the package specification, package body, or both. |
[PACKAGE|SPECIFICATION|BODY ] |
Specify PACKAGE to recompile both the package specification and the body. Specify SPECIFICATION to recompile the package specification. Specify BODY to recompile the package body.
|
CompilerParametersClause |
Use this optional clause to specify a value for one of the PL/SQL persistent compiler parameters. The PL/SQL persistent compiler parameters are PLSQL_OPTIMIZE_LEVEL , PLSCOPE_SETTINGS and NLS_LENGTH_SEMANTICS .
You can specify each parameter once in the statement. If you omit a parameter from this clause and you specify REUSE SETTINGS, TimesTen uses the existing setting for that parameter. |
REUSE SETTINGS |
Use this optional clause to prevent TimesTen from dropping and reacquiring compiler switch settings. When you specify REUSE SETTINGS , TimesTen preserves the existing settings and uses them for the compilation of any parameters for which values are not specified. |
When you recompile a package specification, TimesTen invalidates local objects that depend on the specification, such as procedures that call procedures or functions in the package. The body of the package also depends on the specification. If you subsequently reference one of these dependent objects without first explicitly recompiling it, then TimesTen recompiles it implicitly at runtime.
When you recompile a package body, TimesTen does not invalidate objects that depend on the package specification. TimesTen first recompiles objects upon which the body depends, if any of those objects are invalid. If TimesTen recompiles the body successfully, then the body becomes valid.
When you recompile a package, both the specification and the body are explicitly recompiled. If there are no compilation errors, then the specification and body become valid. If there are compilation errors, then TimesTen returns an error and the package remains invalid.
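For example, the following sketch recompiles both the specification and the body of a hypothetical package named emp_mgmt:
Command> ALTER PACKAGE emp_mgmt COMPILE PACKAGE;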
CREATE PACKAGE
The ALTER PROCEDURE statement recompiles a standalone stored procedure. Explicit recompilation eliminates the need for implicit runtime recompilation and prevents associated runtime compilation errors and performance overhead.
To recompile a procedure that is part of a package, recompile the package using the ALTER PACKAGE statement.
No privilege is required for the procedure owner.
ALTER ANY PROCEDURE is required for another user's procedure.
ALTER PROCEDURE [Owner.]ProcedureName COMPILE [CompilerParametersClause [...]] [REUSE SETTINGS]
Parameter | Description |
---|---|
[ Owner .] ProcedureName |
Name of the procedure to be recompiled. |
COMPILE |
Required keyword that causes recompilation of the procedure. If the procedure does not compile successfully, use the ttIsql command SHOW ERRORS to display the compiler error messages. |
CompilerParametersClause |
Use this optional clause to specify a value for one of the PL/SQL persistent compiler parameters. The PL/SQL persistent compiler parameters are PLSQL_OPTIMIZE_LEVEL , PLSCOPE_SETTINGS and NLS_LENGTH_SEMANTICS .
You can specify each parameter once in the statement. If you omit a parameter from this clause and you specify REUSE SETTINGS, TimesTen uses the existing setting for that parameter. |
REUSE SETTINGS |
Use this optional clause to prevent TimesTen from dropping and reacquiring compiler switch settings. When you specify REUSE SETTINGS , TimesTen preserves the existing settings and uses them for the compilation of any parameters for which values are not specified. |
The ALTER PROCEDURE statement does not change the declaration or definition of an existing procedure. To redeclare or redefine a procedure, use the CREATE PROCEDURE statement.
TimesTen first recompiles objects upon which the procedure depends, if any of those objects are invalid.
TimesTen also invalidates any objects that depend on the procedure, such as procedures that call the recompiled procedure or package bodies that define procedures that call the recompiled procedure.
If TimesTen recompiles the procedure successfully, then the procedure becomes valid. If recompiling the procedure results in compilation errors, then TimesTen returns an error and the procedure remains invalid. Use the ttIsql command SHOW ERRORS to display compilation errors.
During recompilation, TimesTen drops all persistent compiler settings, retrieves them again from the session, and stores them at the end of compilation. To avoid this process, specify the REUSE SETTINGS clause.
Query the system view USER_PLSQL_OBJECT_SETTINGS to check PLSQL_OPTIMIZE_LEVEL and PLSCOPE_SETTINGS for procedure query_emp. Alter query_emp by changing PLSQL_OPTIMIZE_LEVEL to 3. Verify the results.
Command> SELECT PLSQL_OPTIMIZE_LEVEL, PLSCOPE_SETTINGS > FROM user_plsql_object_settings WHERE name = 'QUERY_EMP'; < 2, IDENTIFIERS:NONE > 1 row found. Command> ALTER PROCEDURE query_emp COMPILE PLSQL_OPTIMIZE_LEVEL = 3; Procedure altered. Command> SELECT PLSQL_OPTIMIZE_LEVEL, PLSCOPE_SETTINGS > FROM user_plsql_object_settings WHERE name = 'QUERY_EMP'; < 3, IDENTIFIERS:NONE > 1 row found.
CREATE PROCEDURE
The ALTER REPLICATION statement adds, alters, or drops replication elements and changes the replication attributes of participating databases involved in a classic replication scheme.
Most ALTER REPLICATION operations are supported only when the replication agent is stopped (ttAdmin -repStop). However, it is possible to dynamically add a subscriber database to a replication scheme while the replication agent is running. See "Altering Replication" in Oracle TimesTen In-Memory Database Replication Guide for more information.
The required privilege is ADMIN.
The ALTER REPLICATION statement has the syntax:
ALTER REPLICATION [Owner.]ReplicationSchemeName ElementOperation [...] | StoreOperation | NetworkOperation [...]
Specify ElementOperation one or more times:
ADD ELEMENT ElementName
  { DATASTORE |
    { TABLE [Owner.]TableName [CheckConflicts] } |
    SEQUENCE [Owner.]SequenceName }
  { MASTER | PROPAGATOR } FullStoreName
  { SUBSCRIBER FullStoreName [,...] [ReturnServiceAttribute] } [...]
  { INCLUDE | EXCLUDE }
  { TABLE [[Owner.]TableName [,...]] |
    CACHE GROUP [[Owner.]CacheGroupName [,...]] |
    SEQUENCE [[Owner.]SequenceName [,...]] } [,...]

ALTER ELEMENT { ElementName | * IN FullStoreName }
  { ADD SUBSCRIBER FullStoreName [,...] [ReturnServiceAttribute] |
    ALTER SUBSCRIBER FullStoreName [,...]
      SET [ReturnServiceAttribute] |
    DROP SUBSCRIBER FullStoreName [,...] }

ALTER ELEMENT * IN FullStoreName
  SET { MASTER | PROPAGATOR } FullStoreName

ALTER ELEMENT ElementName
  { SET NAME NewElementName | SET CheckConflicts }

ALTER ELEMENT ElementName
  { INCLUDE | EXCLUDE }
  { TABLE [Owner.]TableName |
    CACHE GROUP [Owner.]CacheGroupName |
    SEQUENCE [Owner.]SequenceName } [,...]

DROP ELEMENT { ElementName | * IN FullStoreName }
CheckConflicts can only be set when replicating TABLE elements. The syntax is described in "CHECK CONFLICTS".
Syntax for ReturnServiceAttribute is:
{ RETURN RECEIPT [BY REQUEST] | NO RETURN }
StoreOperation clauses:
ADD STORE FullStoreName [StoreAttribute [...]]

ALTER STORE FullStoreName SET StoreAttribute [...]
Syntax for the StoreAttribute is:
DISABLE RETURN {SUBSCRIBER | ALL} NumFailures
RETURN SERVICES {ON | OFF} WHEN [REPLICATION] STOPPED
DURABLE COMMIT {ON | OFF}
RESUME RETURN MilliSeconds
LOCAL COMMIT ACTION {NO ACTION | COMMIT}
RETURN WAIT TIME Seconds
COMPRESS TRAFFIC {ON | OFF}
PORT PortNumber
TIMEOUT Seconds
FAILTHRESHOLD Value
CONFLICT REPORTING SUSPEND AT Value
CONFLICT REPORTING RESUME AT Value
TABLE DEFINITION CHECKING {EXACT|RELAXED}
Specify NetworkOperation one or more times:
ADD ROUTE MASTER FullStoreName SUBSCRIBER FullStoreName
  { { MASTERIP MasterHost | SUBSCRIBERIP SubscriberHost } PRIORITY Priority } [...]

DROP ROUTE MASTER FullStoreName SUBSCRIBER FullStoreName
  { MASTERIP MasterHost | SUBSCRIBERIP SubscriberHost } [...]
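As an illustration of the element syntax, the following sketch (assuming an existing classic replication scheme r1, databases masterds and subscriberds, and a table ttuser.tab, with the replication agent stopped) adds a table element to the scheme:
ALTER REPLICATION r1
  ADD ELEMENT e1 TABLE ttuser.tab
    MASTER masterds
    SUBSCRIBER subscriberds;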
Parameter | Description |
---|---|
[ Owner .] ReplicationSchemeName |
Name assigned to the classic replication scheme. |
ADD ELEMENT ElementName |
Adds a new element to the existing classic replication scheme. ElementName is an identifier of up to 30 characters. With DATASTORE elements, the ElementName must be unique with respect to other DATASTORE element names within the first 20 chars.
If the element is a |
ADD ELEMENT ElementName DATASTORE
|
Adds a new DATASTORE element to the existing classic replication scheme. ElementName is an identifier of up to 30 characters. With DATASTORE elements, the ElementName must be unique with respect to other DATASTORE element names within the first 20 chars.
If the element is a sequence, |
ADD SUBSCRIBER FullStoreName |
Indicates an additional subscriber database. FullStoreName is the database file name specified in the DataStore attribute of the DSN description. |
ALTER ELEMENT * IN FullStoreName
|
Makes a change to all elements for which FullStoreName is the MASTER or PROPAGATOR . FullStoreName is the database file name specified in the DataStore attribute of the DSN description.
This syntax can be used on a set of element names to:
|
ALTER ELEMENT ElementName |
Name of the element to which a subscriber is to be added or dropped. |
ALTER ELEMENT ElementName1 SET NAME ElementName2 |
Renames ElementName1 with the name ElementName2 . You can only rename elements of type TABLE . |
ALTER ELEMENT ElementName
|
ElementName is the name of the element to be altered.
If the element is a sequence, |
ALTER SUBSCRIBER FullStoreName
|
Indicates an alteration to a subscriber database to enable, disable, or change the return receipt service. FullStoreName is the database file name specified in the DataStore attribute of the DSN description. |
CheckConflicts |
Check for replication conflicts when simultaneously writing to bidirectionally replicating TABLE elements between databases. You cannot check for conflicts when replicating elements of type DATASTORE . See "CHECK CONFLICTS". |
COMPRESS TRAFFIC {ON | OFF} |
Compress replicated traffic to reduce the amount of network bandwidth. ON specifies that all replicated traffic for the database defined by STORE be compressed. OFF (the default) specifies no compression. See "Compressing replicated traffic" in Oracle TimesTen In-Memory Database Replication Guide for details. |
CONFLICT REPORTING SUSPEND AT Value |
Suspends conflict resolution reporting.
This clause is valid for table level replication. |
CONFLICT REPORTING RESUME AT Value |
Resumes conflict resolution reporting.
This clause is valid for table level replication. |
DISABLE RETURN {SUBSCRIBER | ALL} NumFailures |
Set the return service failure policy so that return service blocking is disabled after the number of timeouts specified by NumFailures . Selecting SUBSCRIBER applies this policy only to the subscriber that fails to acknowledge replicated updates within the set timeout period. ALL applies this policy to all subscribers should any of the subscribers fail to respond. This failure policy can be specified for either the RETURN RECEIPT or RETURN TWOSAFE service.
If See "Managing return service timeout errors and replication state changes" in Oracle TimesTen In-Memory Database Replication Guide for details. |
DURABLE COMMIT {ON | OFF} |
Overrides the DurableCommits general connection attribute setting. DURABLE COMMIT ON enables durable commits regardless of whether the replication agent is running or stopped. |
DROP ELEMENT * IN FullStoreName |
Deletes the replication description of all elements for which FullStoreName is the MASTER . FullStoreName is the database file name specified in the DataStore attribute of the DSN description. |
DROP ELEMENT ElementName |
Deletes the replication description of ElementName . |
DROP SUBSCRIBER FullStoreName |
Indicates that updates should no longer be sent to the specified subscriber database. This operation fails if the classic replication scheme has only one subscriber. FullStoreName is the database file name specified in the DataStore attribute of the DSN description. |
FAILTHRESHOLD Value |
The number of log files that can accumulate for a subscriber database. If this value is exceeded, the subscriber is set to the Failed state.
The value 0 means "No Limit." This is the default. See "Setting the transaction log failure threshold" in Oracle TimesTen In-Memory Database Replication Guide for more information. |
FullStoreName |
The database, specified as the database file base name. This is the database file name specified in the DataStore attribute of the DSN description, without the path or the file name extension. |
LOCAL COMMIT ACTION
|
Specifies the default action to be taken for a RETURN TWOSAFE transaction in the event of a timeout.
This setting can be overridden for specific transactions by calling the ttRepSyncSet built-in procedure with the localAction parameter. |
MASTER FullStoreName |
The database on which applications update the specified element. The MASTER database sends updates to its SUBSCRIBER databases. FullStoreName is the database file name specified in the DataStore attribute of the DSN description. |
NO RETURN |
Specifies that no return service is to be used. This is the default.
For details on the use of the return services, see "Using a return service" in Oracle TimesTen In-Memory Database Replication Guide. |
PORT PortNumber |
The TCP/IP port number on which the replication agent on this database listens for connections. If not specified, the replication agent allocates a port number automatically.
All TimesTen databases that replicate to each other must use the same port number. |
PROPAGATOR FullStoreName |
The database that receives replicated updates and passes them on to other databases. |
RESUME RETURN MilliSeconds |
If return service blocking has been disabled by DISABLE RETURN , this attribute sets the policy on when to re-enable return service blocking. Return service blocking is re-enabled as soon as the failed subscriber acknowledges the replicated update in a period of time that is less than the specified MilliSeconds .
|
RETURN RECEIPT [BY REQUEST] |
Enables the return receipt service, so that applications that commit a transaction to a master database are blocked until the transaction is received by all subscribers.
|
RETURN SERVICES {ON | OFF} WHEN [REPLICATION] STOPPED |
Sets return services on or off when replication is disabled (stopped or paused state).
See "Managing return service timeout errors and replication state changes" in Oracle TimesTen In-Memory Database Replication Guide for details. |
RETURN TWOSAFE [BY REQUEST] |
Enables the return twosafe service, so that applications that commit a transaction to a master database are blocked until the transaction is committed on all subscribers.
|
RETURN WAIT TIME Seconds |
Specifies the number of seconds to wait for return service acknowledgment. The default value is 10 seconds. A value of '0' means there is no timeout. Your application can override this timeout setting by calling the ttRepSyncSet procedure with the returnWait parameter. |
SET {MASTER | PROPAGATOR} FullStoreName |
Sets the given database to be the MASTER or PROPAGATOR of the given elements. The FullStoreName must be the database's file base name. |
SUBSCRIBER FullStoreName |
A database that receives updates from the MASTER databases. FullStoreName is the database file name specified in the DataStore attribute of the DSN description. |
TABLE DEFINITION CHECKING {EXACT|RELAXED} |
Specifies the type of table definition checking that occurs on the subscriber:
EXACT - The table definitions on the master and the subscriber must match exactly.
RELAXED - The table definitions on the master and the subscriber must be compatible but need not match exactly. |
TIMEOUT Seconds |
The maximum number of seconds the replication agent waits for a response from the database. Default: 120 seconds. |
ADD ROUTE MASTER FullStoreName SUBSCRIBER FullStoreName |
Adds a NetworkOperation to the classic replication scheme. Enables you to control the network interface that a master store uses for every outbound connection to each of its subscriber stores.
Can be specified more than once. |
DROP ROUTE MASTER FullStoreName SUBSCRIBER FullStoreName |
Drops NetworkOperation from the classic replication scheme.
Can be specified more than once. |
MASTERIP MasterHost | SUBSCRIBERIP SubscriberHost |
MasterHost and SubscriberHost are the IP addresses for the network interface on the master and subscriber stores. Specify in dot notation or canonical format, or in colon notation for IPv6.
The clause can be specified more than once. Valid for both ADD ROUTE and DROP ROUTE . |
PRIORITY Priority |
Variable expressed as an integer from 1 to 99. Denotes the priority of the IP address. Lower integral values have higher priority. An error is returned if multiple addresses with the same priority are specified. Controls the order in which multiple IP addresses are used to establish peer connections.
The PRIORITY clause is required for ADD ROUTE and is not specified for DROP ROUTE . |
ALTER ELEMENT DROP SUBSCRIBER
deletes a subscriber for a particular replication element.
ALTER ELEMENT SET NAME
may be used to change the name of a replication element when it conflicts with one already defined at another database. SET NAME
does not admit the use of * IN
FullStoreName
. The FullStoreName
must be the database's file base name. For example, if the database file name is data.ds0
, then data
is the file base name.
ALTER ELEMENT SET MASTER
may be used to change the master database for replication elements. The * IN
FullStoreName
option must be used for the MASTER
operation. That is, a master database must transfer ownership of all of its replication elements, thereby giving up its master role entirely. Typically, this option is used in ALTER REPLICATION
statements requested at SUBSCRIBER
databases after the failure of a (common) MASTER
.
To transfer ownership of the master elements to the subscriber:
Manually drop the replicated elements by executing an ALTER REPLICATION DROP ELEMENT
statement for each replicated table.
Use ALTER REPLICATION ADD ELEMENT
to add each table back to the replication scheme, with the newly designated MASTER
/ SUBSCRIBER
roles.
ALTER REPLICATION ALTER ELEMENT SET MASTER
does not automatically retain the old master as a subscriber in the scheme. If this is desired, execute an ALTER REPLICATION ALTER ELEMENT ADD SUBSCRIBER
statement.
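For example, a minimal sketch of this sequence (the scheme r1, element e1, old master west on host "westcoast" and new master newwest are hypothetical names, shown for illustration only):
ALTER REPLICATION r1 ALTER ELEMENT * IN west SET MASTER newwest;
ALTER REPLICATION r1 ALTER ELEMENT e1 ADD SUBSCRIBER west ON "westcoast";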
Stop the replication agent before you use the NetworkOperation
clause.
You cannot alter the following replication schemes with the ALTER REPLICATION
statement:
Any active standby pair. Instead, use ALTER ACTIVE STANDBY PAIR
.
A Clusterware-managed active standby pair. Instead, perform the tasks described in "Changing the schema" section of the Oracle TimesTen In-Memory Database Replication Guide.
This example sets up a classic replication scheme for an additional table westleads
that is updated on database west
and replicated to database east
.
ALTER REPLICATION r1 ADD ELEMENT e3 TABLE westleads MASTER west ON "westcoast" SUBSCRIBER east ON "eastcoast";
This example adds an additional subscriber (backup
) to table westleads
.
ALTER REPLICATION r1 ALTER ELEMENT e3 ADD SUBSCRIBER backup ON "backupserver";
This example changes the element name of table westleads
from e3
to newelementname
.
ALTER REPLICATION r1 ALTER ELEMENT e3 SET NAME newelementname;
This example makes newwest
the master for all elements for which west
currently is the master.
ALTER REPLICATION r1 ALTER ELEMENT * IN west SET MASTER newwest;
This example changes the port number for east
.
ALTER REPLICATION r1 ALTER STORE east ON "eastcoast" SET PORT 22251;
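Other STORE attributes can be changed with the same form of the ALTER STORE clause. For example, the following sketch (hypothetical; it reuses the store and host names above) turns on compression of replicated traffic:
ALTER REPLICATION r1 ALTER STORE east ON "eastcoast" SET COMPRESS TRAFFIC ON;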
This example adds my.tab1
table to the ds1
database element in my.rep1
replication scheme.
ALTER REPLICATION my.rep1 ALTER ELEMENT ds1 DATASTORE INCLUDE TABLE my.tab1;
This example adds my.cg1
cache group to ds1
database in my.rep1
replication scheme.
ALTER REPLICATION my.rep1 ALTER ELEMENT ds1 DATASTORE INCLUDE CACHE GROUP my.cg1;
This example adds ds1
database to my.rep1
replication scheme. Include my.tab2
table, my.cg2
cache group, and my.cg3
cache group in the database.
ALTER REPLICATION my.rep1 ADD ELEMENT ds1 DATASTORE MASTER rep2 SUBSCRIBER rep1, rep3 INCLUDE TABLE my.tab2 INCLUDE CACHE GROUP my.cg2, my.cg3;
This example adds ds2
database to a replication scheme but exclude my.tab1
table, my.cg0
cache group and my.cg1
cache group.
ALTER REPLICATION my.rep1 ADD ELEMENT ds2 DATASTORE MASTER rep2 SUBSCRIBER rep1 EXCLUDE TABLE my.tab1 EXCLUDE CACHE GROUP my.cg0, my.cg1;
Add NetworkOperation
clause:
ALTER REPLICATION r ADD ROUTE MASTER rep1 ON "machine1" SUBSCRIBER rep2 ON "machine2" MASTERIP "1.1.1.1" PRIORITY 1 SUBSCRIBERIP "2.2.2.2" PRIORITY 1 MASTERIP "3.3.3.3" PRIORITY 2 SUBSCRIBERIP "4.4.4.4" PRIORITY 2;
Drop NetworkOperation
clause:
ALTER REPLICATION r DROP ROUTE MASTER rep1 ON "machine1" SUBSCRIBER rep2 ON "machine2" MASTERIP "1.1.1.1" SUBSCRIBERIP "2.2.2.2" MASTERIP "3.3.3.3" SUBSCRIBERIP "4.4.4.4";
ALTER ACTIVE STANDBY PAIR
CREATE ACTIVE STANDBY PAIR
CREATE REPLICATION
DROP ACTIVE STANDBY PAIR
DROP REPLICATION
To drop a table from a database, see "Altering a replicated table" in Oracle TimesTen In-Memory Database Replication Guide.
The ALTER SESSION
statement changes session parameters dynamically.
None
ALTER SESSION SET {DDL_REPLICATION_ACTION={'INCLUDE'|'EXCLUDE'} | DDL_REPLICATION_LEVEL={1|2} | NLS_SORT = {BINARY| SortName} | NLS_LENGTH_SEMANTICS = {BYTE|CHAR} | NLS_NCHAR_CONV_EXCP = {TRUE|FALSE} | ISOLATION_LEVEL = {SERIALIZABLE | READ COMMITTED} | PLSQL_TIMEOUT = n | PLSQL_OPTIMIZE_LEVEL = {0|1|2|3}| PLSCOPE_SETTINGS = {'IDENTIFIERS:ALL'|'IDENTIFIERS:NONE'} | PLSQL_CONN_MEM_LIMIT = n | REPLICATION_TRACK = TrackNumber | COMMIT_BUFFER_SIZE_MAX = n }
Parameter | Description |
---|---|
DDL_REPLICATION_ACTION={'INCLUDE'|'EXCLUDE'} |
To include a table in the active standby pair when the table is created, set the DDL_REPLICATION_ACTION connection attribute to INCLUDE . If you do not want to include a table in the active standby pair when the table is created, set DDL_REPLICATION_ACTION to EXCLUDE . The default is INCLUDE .
If set to EXCLUDE , you can add the table to the active standby pair later with the ALTER ACTIVE STANDBY PAIR ... INCLUDE TABLE statement. This attribute is only meaningful when DDL_REPLICATION_LEVEL is set to 2. See "Making DDL changes in an active standby pair" in the Oracle TimesTen In-Memory Database Replication Guide for more information. |
DDL_REPLICATION_LEVEL={1|2} |
Indicates whether DDL is replicated across all databases in an active standby pair. The value can be one of the following:
1 - CREATE and DROP statements for tables, indexes and synonyms are not replicated. This is the default.
2 - CREATE and DROP statements for tables, indexes and synonyms are replicated to the databases in the active standby pair.
See "Making DDL changes in an active standby pair" in the Oracle TimesTen In-Memory Database Replication Guide for more information. |
NLS_SORT= {BINARY| SortName } |
Indicates which collation sequence to use for linguistic comparisons.
Append _CI to a sort name for a case-insensitive sort, or _AI for an accent-insensitive and case-insensitive sort (for example, BINARY_CI or BINARY_AI ). If you do not specify NLS_SORT , the default is BINARY . For a complete list of supported values for SortName , see Oracle TimesTen In-Memory Database Operations Guide. For more information on case-insensitive or accent-insensitive sorting, see "Case-insensitive and accent-insensitive linguistic sorts" in Oracle TimesTen In-Memory Database Operations Guide. |
NLS_LENGTH_ SEMANTICS ={BYTE|CHAR} |
Sets the default length semantics configuration. BYTE indicates byte length semantics. CHAR indicates character length semantics. The default is BYTE .
For more information on length semantics, see "Length semantics and data storage" in Oracle TimesTen In-Memory Database Operations Guide. |
NLS_NCHAR_CONV_EXCP = {TRUE|FALSE} |
Determines whether an error should be reported when there is data loss during an implicit or explicit character type conversion between NCHAR /NVARCHAR2 data and CHAR /VARCHAR2 data. Specify TRUE to enable error reporting. Specify FALSE to not report errors. The default is FALSE . |
ISOLATION_LEVEL = {SERIALIZABLE|READ COMMITTED} |
Sets isolation level. Change takes effect starting with the next transaction.
For a description of the isolation levels, see Oracle TimesTen In-Memory Database Operations Guide. |
PLSQL_TIMEOUT= n |
Controls how long PL/SQL procedures run before being automatically terminated. n represents the time, in seconds. Specify 0 for no time limit or any positive integer. The default is 30.
When you modify this value, the new value impacts PL/SQL program units that are currently running as well as any other program units subsequently executed in the same connection. If PL/SQL is not enabled in your database and you specify this attribute, TimesTen throws an error. |
PLSQL_OPTIMIZE_LEVEL = {0|1|2|3} |
Specifies the optimization level used to compile PL/SQL library units. The higher the setting, the more effort the compiler makes to optimize PL/SQL library units. Possible values are 0, 1, 2 or 3. The default is 2.
If PL/SQL is not enabled in your database and you specify this attribute, TimesTen returns an error. For more information, see "PLSQL_OPTIMIZE_LEVEL" in Oracle TimesTen In-Memory Database Reference. |
PLSCOPE_SETTINGS = '{IDENTIFIERS:ALL |IDENTIFIERS:NONE}' |
Controls whether the PL/SQL compiler generates cross-reference information. Specify IDENTIFIERS:ALL to generate cross-reference information. The default is IDENTIFIERS:NONE .
If PL/SQL is not enabled in your database and you specify this attribute, TimesTen returns an error. For more information, see "PLSCOPE_SETTINGS" in Oracle TimesTen In-Memory Database Reference. |
PLSQL_CONN_MEM_LIMIT = n |
Specifies the maximum amount of process heap memory that PL/SQL can use for this connection. n is an integer expressed in megabytes. The default is 100.
If PL/SQL is not enabled in your database and you specify this attribute, TimesTen returns an error. For more information, see "PLSQL_CONN_MEM_LIMIT" in Oracle TimesTen In-Memory Database Reference. |
REPLICATION_TRACK = TrackNumber |
When user-defined parallel replication is configured, specify the track to which the transactions belong for the current connection. All transactions on replicated tables are associated with a track. The track number setting is constant for the lifetime of the connection, unless specifically reset. The default track number is 0.
If the number specified is for a non-existent replication track You cannot change tracks in the middle of a transaction unless all preceding operations have been read operations. For more information, see "Configuring user-defined parallel replication for other replication schemes" in Oracle TimesTen In-Memory Database Replication Guide. Note: User-defined parallel replication is not supported for active standby pairs. |
COMMIT_BUFFER_SIZE_MAX= n |
Changes the maximum size of the commit buffer when a connection is in progress. n is expressed as an integer and represents the maximum size of the commit buffer (in MB).
Change takes effect starting with the next transaction. For more information on the commit buffer and transaction reclaim operations, see "Transaction reclaim operations" in the Oracle TimesTen In-Memory Database Operations Guide and "CommitBufferSizeMax" in the Oracle TimesTen In-Memory Database Reference. |
The ALTER SESSION
statement affects commands that are subsequently executed by the session. The new session parameters take effect immediately.
Operations involving character comparisons support linguistically sensitive collating sequences. Case-insensitive sorts may affect DISTINCT value interpretation.
Implicit and explicit conversions between CHAR
and NCHAR
are supported.
Conversions between CHAR
and NCHAR
are not allowed when using the TIMESTEN8
character set.
You can use the SQL string functions with the supported character sets. For example, UPPER
and LOWER
functions support non-ASCII
CHAR
and VARCHAR2
characters as well as NCHAR
and NVARCHAR2
characters.
Choice of character set could have an impact on memory consumption for CHAR
and VARCHAR2
column data.
The character sets of all databases involved in a replication scheme must match.
To add an existing table to an active standby pair, set DDL_REPLICATION_LEVEL=2
and DDL_REPLICATION_ACTION
to INCLUDE
. Alternatively, you can use the ALTER ACTIVE STANDBY PAIR INCLUDE TABLE
statement if DDL_REPLICATION_ACTION
is set to EXCLUDE
. In this case, the table must be empty and present on all databases before executing the ALTER ACTIVE STANDBY PAIR INCLUDE TABLE
statement as the table contents will be truncated when this statement is executed.
Objects are only replicated to TimesTen instances of release 11.2.1.8 or greater that are in a replication scheme using an active standby pair.
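The following sketch (hypothetical values, shown for illustration and assuming a connection with default settings) changes the isolation level and the default length semantics for the session; neither parameter is covered by the examples that follow.
Command> ALTER SESSION SET ISOLATION_LEVEL = READ COMMITTED; Session altered. Command> ALTER SESSION SET NLS_LENGTH_SEMANTICS = CHAR; Session altered.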
Use the ALTER
SESSION
statement to change COMMIT_BUFFER_SIZE_MAX
to 500 MB. First, call ttConfiguration
to display the current connection setting. Use the ALTER
SESSION
statement to change the COMMIT_BUFFER_SIZE_MAX
setting to 500. Call ttConfiguration
to display the new setting.
Command> CALL ttConfiguration ('CommitBufferSizeMax'); < CommitBufferSizeMax, 0 > 1 row found. Command> ALTER SESSION SET COMMIT_BUFFER_SIZE_MAX = 500; Session altered. Command> CALL ttConfiguration ('CommitBufferSizeMax'); < CommitBufferSizeMax, 500 > 1 row found.
Use the ALTER SESSION
statement to change PLSQL_TIMEOUT
to 60 seconds. Use a second ALTER SESSION
statement to change PLSQL_OPTIMIZE_LEVEL
to 3. Then call ttConfiguration
to display the new values.
Command> ALTER SESSION SET PLSQL_TIMEOUT = 60; Session altered.Command> ALTER SESSION SET PLSQL_OPTIMIZE_LEVEL = 3; Session altered. Command> CALL TTCONFIGURATION (); < CkptFrequency, 600 > < CkptLogVolume, 0 > < CkptRate, 0 > ... < PLSQL_OPTIMIZE_LEVEL, 3 > < PLSQL_TIMEOUT, 60 > ... 47 rows found.
In this example, set PLSQL_TIMEOUT
to 20 seconds. Attempt to execute a program that loops indefinitely. In 20 seconds, execution is terminated and an error is returned.
Command> ALTER SESSION SET PLSQL_TIMEOUT = 20; Command> DECLARE v_timeout NUMBER; > BEGIN > LOOP > v_timeout :=0; > EXIT WHEN v_timeout < 0; > END LOOP; > END; > / 8509: PL/SQL execution terminated; PLSQL_TIMEOUT exceeded
Call ttConfiguration
to display the current PLSCOPE_SETTINGS
value. Use the ALTER SESSION
statement to change the PLSCOPE_SETTINGS
value to IDENTIFIERS:ALL
. Create a dummy procedure p
. Query the system view SYS.USER_PLSQL_OBJECT_SETTINGS
to confirm that the new setting is applied to procedure p
.
Command> CALL TTCONFIGURATION (); < CkptFrequency, 600 > < CkptLogVolume, 0 > < CkptRate, 0 > ... < PLSCOPE_SETTINGS, IDENTIFIERS:NONE > ... 47 rows found. Command> ALTER SESSION SET PLSCOPE_SETTINGS = 'IDENTIFIERS:ALL'; Session altered. Command> CREATE OR REPLACE PROCEDURE p IS > BEGIN > NULL; > END; > / Procedure created. Command> SELECT PLSCOPE_SETTINGS FROM SYS.USER_PLSQL_OBJECT_SETTINGS WHERE > NAME = 'p'; < IDENTIFIERS:ALL > 1 row found.
The following example uses the ALTER SESSION
statement to change the NLS_SORT
setting from BINARY
to BINARY_CI
to BINARY_AI
. The database and connection character sets are WE8ISO8859P1
.
Command> connect "dsn=cs;ConnectionCharacterSet=WE8ISO8859P1"; Connection successful: DSN=cs;UID=user;DataStore=/datastore/user/cs; DatabaseCharacterSet=WE8ISO8859P1; ConnectionCharacterSet=WE8ISO8859P1;PermSize=32;TypeMode=0; (Default setting AutoCommit=1) Command>#Create the Table Command> CREATE TABLE collatingdemo (letter VARCHAR2 (10)); Command>#Insert values Command> INSERT INTO collatingdemo VALUES ('a'); 1 row inserted. Command> INSERT INTO collatingdemo VALUES ('A'); 1 row inserted. Command> INSERT INTO collatingdemo VALUES ('Y'); 1 row inserted. Command> INSERT INTO collatingdemo VALUES ('ä'); 1 row inserted. Command>#SELECT Command> SELECT * FROM collatingdemo; < a > < A > < Y > < ä > 4 rows found. Command>#SELECT with ORDER BY Command> SELECT * FROM collatingdemo ORDER BY letter; < A > < Y > < a > < ä > 4 rows found. Command>#set NLS_SORT to BINARY_CI and SELECT Command> ALTER SESSION SET NLS_SORT = BINARY_CI; Command> SELECT * FROM collatingdemo ORDER BY letter; < a > < A > < Y > < Ä > < ä > 4 rows found. Command>#Set NLS_SORT to BINARY_AI and SELECT Command> ALTER SESSION SET NLS_SORT = BINARY_AI; Command> SELECT * FROM collatingdemo ORDER BY letter; < ä > < a > < A > < Y > 4 rows found.
The following example enables user-defined parallel replication (not supported for active standby pairs) and uses the ALTER SESSION
statement to change the replication track number to 5 for the current connection. To enable parallel replication for replication schemes, set ReplicationApplyOrdering
to 1. Then, always set REPLICATION_TRACK
to a number less than or equal to ReplicationParallelism
. For example, the ReplicationParallelism
connection attribute could be set to 6, which is higher than the value of 5 set for REPLICATION_TRACK
.
Command> ALTER SESSION SET REPLICATION_TRACK = 5; Session altered.
The following example enables replication of adding and dropping columns, tables, synonyms and indexes by setting the following on the active database in an active standby pair: DDLReplicationLevel
to 2
and DDLReplicationAction
to 'INCLUDE'
.
Command > ALTER SESSION SET DDL_REPLICATION_LEVEL=2; Session altered. Command > ALTER SESSION SET DDL_REPLICATION_ACTION='INCLUDE'; Session altered.
The ALTER TABLE
statement changes an existing table definition.
No privilege is required for the table owner.
ALTER ANY TABLE
for another user's table.
For ALTER TABLE...ADD FOREIGN KEY
, the owner of the altered table must have the REFERENCES
privilege on the table referenced by the foreign key clause.
To add one column:
ALTER TABLE [Owner.]TableName ADD [COLUMN] ColumnName ColumnDataType [DEFAULT DefaultVal] [[NOT] INLINE] [UNIQUE] [NULL] [COMPRESS (CompressColumns [,...])]
or to add multiple columns:
ALTER TABLE [Owner.]TableName ADD (ColumnName ColumnDataType [DEFAULT DefaultVal] [[NOT] INLINE] [UNIQUE] [NULL] [,... ] ) [COMPRESS (CompressColumns [,...])]
To add a NOT
NULL
column (Note: The default clause is required):
ALTER TABLE [Owner.]TableName ADD [COLUMN] ColumnName ColumnDataType NOT NULL [ENABLE] DEFAULT DefaultVal [[NOT] INLINE] [UNIQUE] [COMPRESS (CompressColumns [,...])]
To add multiple NOT
NULL
columns (Note: The default clause is required):
ALTER TABLE [Owner.]TableName ADD (ColumnName ColumnDataType NOT NULL [ENABLE] DEFAULT DefaultVal [[NOT] INLINE] [UNIQUE] [,...]) [COMPRESS (CompressColumns [,...])]
The CompressColumns
syntax is as follows:
{ColumnDefinition | (ColumnDefinition [,...])} BY DICTIONARY [MAXVALUES = CompressMax]
To remove columns (if removing columns in a compressed column group, all columns in the compressed column group must be specified):
ALTER TABLE [Owner.]TableName DROP {[COLUMN] ColumnName | (ColumnName [,... ] )}
To add a primary key constraint using a range index:
ALTER TABLE [Owner.]TableName ADD CONSTRAINT ConstraintName PRIMARY KEY (ColumnName [,... ])
To add a primary key constraint using a hash index:
ALTER TABLE [Owner.]TableName ADD CONSTRAINT ConstraintName PRIMARY KEY (ColumnName [,... ]) USE HASH INDEX PAGES = RowPages | CURRENT
To add a foreign key and optionally add ON DELETE CASCADE
:
ALTER TABLE [Owner.]TableName ADD [CONSTRAINT ForeignKeyName] FOREIGN KEY (ColumnName [,...]) REFERENCES RefTableName [(ColumnName [,...])] [ON DELETE CASCADE]
To remove a foreign key:
ALTER TABLE [Owner.]TableName DROP CONSTRAINT ForeignKeyName
To resize a hash index:
ALTER TABLE [Owner.]TableName SET PAGES = RowPages | CURRENT
To change the primary key to use a hash index:
ALTER TABLE [Owner.]TableName USE HASH INDEX PAGES = RowPages | CURRENT
To change the primary key to use a range index with the USE RANGE INDEX
clause:
ALTER TABLE [Owner.]TableName USE RANGE INDEX
To change the default value of a column:
ALTER TABLE [Owner.]TableName MODIFY (ColumnName DEFAULT DefaultVal)
To add or drop a unique constraint on a column:
ALTER TABLE [Owner.]TableName {ADD | DROP} UNIQUE (ColumnName)
To remove the default value of a column that is nullable, by changing it to NULL
:
ALTER TABLE [Owner.]TableName MODIFY (ColumnName DEFAULT NULL)
To add LRU aging:
ALTER TABLE [Owner.]TableName ADD AGING LRU [ON | OFF]
To add time-based aging:
ALTER TABLE [Owner.]TableName ADD AGING USE ColumnName LIFETIME num1 {SECOND[S] | MINUTE[S] | HOUR[S] | DAY[S]} [CYCLE num2 {SECOND[S] | MINUTE[S] | HOUR[S] | DAY[S] }] [ON | OFF]
To change the aging state:
ALTER TABLE [Owner.]TableName SET AGING {ON | OFF}
To drop aging:
ALTER TABLE [Owner.]TableName DROP AGING
To change the lifetime for time-based aging:
ALTER TABLE [Owner.]TableName SET AGING LIFETIME num1 {SECOND[S] | MINUTE[S] | HOUR[S] | DAY[S]}
To change the cycle for time-based aging:
ALTER TABLE [Owner.]TableName SET AGING CYCLE num2 {SECOND[S] | MINUTE[S] | HOUR[S] | DAY[S]}
Parameter | Description |
---|---|
[ Owner .] TableName |
Identifies the table to be altered. |
UNIQUE |
Specifies that in the column ColumnName each row must contain a unique value. |
MODIFY |
Specifies that an attribute of a given column is to be changed to a new value. |
DEFAULT [ DefaultVal |NULL] |
Specifies that the column has a default value, DefaultVal . If NULL , specifies that the default value of the column is to be dropped. If a column with a default value of SYSDATE is added, the value of the column for the existing rows is the system date at the time the column was added. If the default value is one of the USER functions, the column value for existing rows is the user value of the session that executed the ALTER TABLE statement. Currently, you cannot assign a default value for the ROWID data type.
Altering the default value of a column has no impact on existing rows. |
ColumnName |
Name of the column participating in the ALTER TABLE statement. A new column cannot have the same name as an existing column or another new column. If you add a NOT NULL column, you must include the DEFAULT clause. |
ColumnDataType |
Type of the column to be added. Some types require additional parameters. See Chapter 1, "Data Types" for the data types that can be specified. |
NOT NULL [ENABLE] |
If you add a column, you can specify NOT NULL . If you specify NOT NULL , then you must include the DEFAULT clause. Optionally, you can specify ENABLE after the NOT NULL clause. Because NOT NULL constraints are always enabled, you are not required to specify ENABLE . |
INLINE| NOT INLINE |
By default, variable-length columns whose declared column length is > 128 bytes are stored out of line. Variable-length columns whose declared column length is <= 128 bytes are stored inline. The default behavior can be overridden during table creation through the use of the INLINE and NOT INLINE keywords. |
COMPRESS ( CompressColumns [,...]) |
Defines a compressed column group for a table that is enabled for compression. This can include one or more columns in the table.
If you define multiple columns for a compression group, you must specify the columns as a list enclosed in parentheses. Each compressed column group is limited to a maximum of 16 columns. For more details on compression columns, see "In-memory columnar compression of tables". |
BY DICTIONARY |
Defines a compression dictionary for each compressed column group. |
MAXVALUES = CompressMax |
CompressMax is the maximum number of distinct values that can be stored for the compressed column group. It determines the size of the pointer column for the group (1, 2, or 4 bytes) and the maximum number of entries in the dictionary table.
The maximum size defaults to 2^32-1 if MAXVALUES is not specified. For more details on maximum sizing for compression dictionaries, see "In-memory columnar compression of tables". |
ADD CONSTRAINT ConstraintName PRIMARY KEY ( ColumnName
|
Adds a primary key constraint to the table. Columns of the primary key must be defined as NOT NULL .
Specify USE HASH INDEX PAGES to create the primary key with a hash index; if this clause is omitted, a range index is used. Specify the PAGES value as either RowPages or CURRENT . If you specify CURRENT , the current number of rows in the table is used to calculate the page count; otherwise the value you supply for RowPages is used (divide the expected number of rows by 256). If your estimate is too small, performance may be degraded. For more information on hash indexes, see "Column Definition". |
CONSTRAINT |
Specifies that a foreign key is to be dropped. Optionally specifies that an added foreign key is named by the user. |
ForeignKeyName |
Name of the foreign key to be added or dropped. All foreign keys are assigned a default name by the system if the name was not specified by the user. Either the user-provided name or system name can be specified in the DROP FOREIGN KEY clause. |
FOREIGN KEY |
Specifies that a foreign key is to be added or dropped. See "FOREIGN KEY ". |
REFERENCES |
Specifies that the foreign key references another table. |
RefTableName |
The name of the table that the foreign key references. |
[ON DELETE CASCADE] |
Enables the ON DELETE CASCADE referential action. If specified, when rows containing referenced key values are deleted from a parent table, rows in child tables with dependent foreign key values are also deleted. |
USE HASH INDEX PAGES = RowPages | CURRENT |
Changes primary key to use a hash index. If the primary key already uses a hash index, then this clause is equivalent to the SET PAGES clause. |
USE RANGE INDEX |
Changes primary key to use a range index. If the primary key already uses a range index, TimesTen ignores this clause. |
SET PAGES = RowPages | CURRENT |
Resizes the hash index to reflect the expected number of pages in the table. If you specify CURRENT , the current number of rows in the table is used to calculate the page count value. If you specify RowPages , the number of pages is used. To determine the value for RowPages , divide the number of expected rows in your table by 256. For example, if your table has 256,000 rows, specify 1000 for RowPages (256000/256=1000).
If your estimate is too small, performance may be degraded. For more information on hash indexes, see "Column Definition". |
ADD AGING LRU [ON | OFF] |
Adds least recently used (LRU) aging to an existing table that has no aging policy defined.
The LRU aging policy defines the type of aging (least recently used) and the aging state ( ON or OFF ). Set the aging state to either ON (the default) or OFF . LRU aging attributes are defined by calling the ttAgingLRUConfig built-in procedure rather than at the SQL level. For more information about LRU aging, see "Implementing aging in your tables" in Oracle TimesTen In-Memory Database Operations Guide. |
ADD AGING USE ColumnName ... [ON| OFF] |
Adds time-based aging to an existing table that has no aging policy defined.
The time-based aging policy defines the type of aging (time-based), the aging state ( ON or OFF ) and the time-based aging attributes. Set the aging state to either ON (the default) or OFF . Time-based aging attributes are defined at the SQL level and are specified by the LIFETIME and optional CYCLE clauses. Specify ColumnName as the column to be used for aging; it must be defined as NOT NULL . The values of the column used for aging are updated by your applications. If the value of this column is unknown for some rows, and you do not want the rows to be aged, define the column with a large default value (the column cannot be NULL ). You can define your aging column with a data type of TIMESTAMP or TT_DATE . For more information about time-based aging, see "Implementing aging in your tables" in Oracle TimesTen In-Memory Database Operations Guide. |
LIFETIME Num1 {SECOND[S] | MINUTE[S] | HOUR[S] | DAY[S]} |
Specify the LIFETIME clause after the ADD AGING USE ColumnName clause if you are adding the time-based aging policy to an existing table. Specify the LIFETIME clause after the SET AGING clause to change the LIFETIME setting.
The LIFETIME clause specifies how long a row must remain in the table before it becomes a candidate for aging. Specify Num1 as a positive integer together with a unit of SECONDS, MINUTES, HOURS or DAYS. The concept of time resolution is supported: if the lifetime is expressed in days, a row is not aged out until a whole number of days has elapsed (see the time resolution example later in this section). |
CYCLE Num2 {SECOND[S]| MINUTE[S]| HOUR[S]|DAY[S]} |
Specify the optional CYCLE clause after the LIFETIME clause if you are adding the time-based aging policy to an existing table.
The CYCLE clause indicates how often the system examines rows to determine whether they are candidates for aging. Specify Num2 as a positive integer together with a unit of SECONDS, MINUTES, HOURS or DAYS. If you do not specify the CYCLE clause, the default aging cycle is 5 minutes. If the aging state is OFF , rows are not aged automatically, regardless of the cycle. |
SET AGING {ON|OFF} |
Changes the aging state. The aging policy must be previously defined. ON enables automatic aging. OFF disables automatic aging. To control aging with an external scheduler, disable aging and invoke the ttAgingScheduleNow built-in procedure. |
DROP AGING |
Drops the aging policy from the table. After you define an aging policy, you cannot alter it. Drop aging, then redefine. |
SET AGING LIFETIME Num1 {SECOND[S]| MINUTE[S]|HOUR[S] |DAY[S]} |
Use this clause to change the lifetime for time-based aging.
If you defined your aging column with data type TT_DATE , the LIFETIME unit must be DAYS . |
SET AGING CYCLE Num2 {SECOND[S]| MINUTE[S]| HOUR[S]|DAY[S]} |
Use this clause to change the cycle for time-based aging.
|
Understanding partitions when using ALTER TABLE
When you create a table, an initial partition is created. If you ALTER
the table, and add additional columns, secondary partitions are created. There is one secondary partition created for each ALTER
TABLE
statement. For a column in secondary partitions, you cannot create a primary key constraint on the column or use the column for time-based aging.
You can use ttMigrate
-r
-relaxedUpgrade
to condense multiple partitions. This means the initial partition plus one or more secondary partitions are condensed into a single partition called the initial partition. Once you condense the partitions, you can then ALTER
the table and add a primary key constraint on the column or use the column for time-based aging. This is because the columns are no longer in secondary partitions but are now in the initial partition.
If your database is involved in replication and you want to condense multiple partitions, you must use the StoreAttribute
TABLE
DEFINITION
CHECKING
RELAXED
(of the CREATE
REPLICATION
statement). Run ttMigrate
-r
-relaxedUpgrade
on both the master and subscriber or on either the master or subscriber by using -duplicate
.
Use ttSchema
to view partition numbers for columns. ttSchema
displays secondary partition number 1 as partition 1, secondary partition number 2 as partition 2 and so on.
As an example, create a table MyTab
with 2 columns. Then ALTER
the table adding 2 columns (Col3
and Col4
) with the NOT
NULL
DEFAULT
clause.
Command> CREATE TABLE MyTab (Col1 NUMBER, Col2 VARCHAR2 (30)); Command> ALTER TABLE MyTab ADD (Col3 NUMBER NOT NULL DEFAULT 10, Col4 TIMESTAMP NOT NULL DEFAULT TIMESTAMP '2012-09-03 12:00:00');
Use ttSchema
to verify Col3
and Col4
are in secondary partition 1.
ttschema -DSN sampledb_1122 -- Database is in Oracle type mode create table TESTUSER.MYTAB ( COL1 NUMBER, COL2 VARCHAR2(30 BYTE) INLINE, COL3 NUMBER NOT NULL DEFAULT 10, COL4 TIMESTAMP(6) NOT NULL DEFAULT TIMESTAMP '2012-09-03 12:00:00'); -- column COL3 partition 1 -- column COL4 partition 1
Attempt to add a primary key constraint on Col3
and time-based aging on Col4
. You see errors because you can neither add a primary key constraint nor add time-based aging to a column that is not in the initial partition.
Command> ALTER TABLE MyTab ADD CONSTRAINT PriKey PRIMARY KEY (Col3); 2419: All columns in a primary key constraint must be in the initial partition; column COL3 was added by ALTER TABLE The command failed. Command> ALTER TABLE MyTab ADD AGING USE Col4 LIFETIME 3 DAYS; 3023: Aging column must be in the initial partition; column COL4 was added by ALTER TABLE The command failed.
Use ttMigrate
with the -relaxedUpgrade
option to condense the partitions. Then use ttSchema
to verify the partitions are condensed and there are no columns in secondary partition 1.
ttMigrate -c dsn=sampledb_1122 test.migrate Saving user PUBLIC User successfully saved. Saving table TESTUSER.MYTAB Saving rows... 0/0 rows saved. Table successfully saved. ttDestroy sampledb_1122 ttMigrate -r -relaxedUpgrade dsn=sampledb_1122 test.migrate Restoring table TESTUSER.MYTAB Restoring rows... 0/0 rows restored. Table successfully restored. ttSchema DSN=sampledb_1122 -- Database is in Oracle type mode create table TESTUSER.MYTAB ( COL1 NUMBER, COL2 VARCHAR2(30 BYTE) INLINE, COL3 NUMBER NOT NULL DEFAULT 10, COL4 TIMESTAMP(6) NOT NULL DEFAULT TIMESTAMP '2012-09-03 12:00:00');
Now add a primary key constraint on Col3
and time-based aging on Col4
. The results are successful because Col3
and Col4
are in the initial partition as a result of ttMigrate
. Use ttSchema
to verify results.
Command> ALTER TABLE MyTab ADD CONSTRAINT PriKey PRIMARY KEY (Col3); Command> ALTER TABLE MyTab ADD AGING USE Col4 LIFETIME 3 DAYS; ttschema sampledb_1122 -- Database is in Oracle type mode create table TESTUSER.MYTAB ( COL1 NUMBER, COL2 VARCHAR2(30 BYTE) INLINE, COL3 NUMBER NOT NULL DEFAULT 10, COL4 TIMESTAMP(6) NOT NULL DEFAULT TIMESTAMP '2012-09-03 12:00:00') AGING USE COL4 LIFETIME 3 days CYCLE 5 minutes ON; alter table TESTUSER.MYTAB add constraint PRIKEY primary key (COL3);
The ALTER TABLE
statement cannot be used to alter a temporary table.
The ALTER TABLE ADD [COLUMN]
ColumnName
statement adds one or more new columns to an existing table. The new columns are added to the end of all existing rows of the table in one new partition. The ALTER TABLE
ADD
or DROP COLUMN
statement can be used to add or drop columns from replicated tables.
Do not use ALTER
TABLE
to alter a replicated table that is part of a TWOSAFE BY REQUEST
transaction. If you attempt to do this when DDLCommitBehavior
=1, TimesTen throws an error. If you attempt to do this when DDLCommitBehavior
=0, the ALTER
TABLE
operation is not part of the TWOSAFE
BY
REQUEST
transaction. This is because the ALTER
TABLE
operation does an implicit commit and operates in its own transaction. The ALTER
TABLE
statement succeeds but the transaction is not part of the TWOSAFE
BY
REQUEST
transaction.
Columns referenced by materialized views cannot be dropped.
You cannot use the ALTER
TABLE
statement to add a column, drop a column, or add a constraint for cache group tables.
Only one partition is added to the table per statement regardless of the number of columns added.
You can ALTER
a table to add a NOT
NULL
column with a default value. The DEFAULT
clause is required. Restrictions include:
You cannot add a NOT
NULL
column to a table that is part of a replication scheme. You must remove the table from the replication scheme first before you can add a NOT
NULL
column to it.
You cannot use the column as a primary key column. Specifically, you cannot specify the column in the statement: ALTER
TABLE
ADD CONSTRAINT ConstraintName
PRIMARY
KEY
(
ColumnName
[,...])
.
You cannot use the column for time-based aging. Specifically, you cannot specify the column in the statement ALTER TABLE ADD AGING USE
ColumnName
.
NULL
is the initial value for all added columns, unless a default value is specified for the new column.
The total number of columns in the table cannot exceed 1000. In addition, the total number of partitions in a table cannot exceed 1000, one of which is used by TimesTen.
Use the ADD CONSTRAINT ... PRIMARY KEY
clause to add a primary key constraint to a regular table or to a detailed or materialized view table. Do not use this clause on a table that already has a primary key.
If you use the ADD CONSTRAINT... PRIMARY KEY
clause to add a primary key constraint, and you do not specify the USE HASH INDEX
clause, then a range index is used for the primary key constraint.
If a table is replicated and the replication agent is active, you cannot use the ADD CONSTRAINT ... PRIMARY KEY
clause. Stop the replication agent first.
Do not specify the ADD CONSTRAINT ... PRIMARY KEY
clause on a global temporary table.
Do not specify the ADD CONSTRAINT ... PRIMARY KEY
clause on a cache group table because cache group tables defined with a primary key must be defined in the CREATE CACHE GROUP
statement.
As the result of an ALTER TABLE ADD
statement, an additional read occurs for each new partition during queries. Therefore, altered tables may have slightly degraded performance. The performance can only be restored by dropping and recreating the table, or by saving the table with the ttMigrate -c command and restoring it with the ttMigrate -r -relaxedUpgrade command. Dropping the added column does not recover the lost performance or decrease the number of partitions.
The ALTER TABLE DROP
statement removes one or more columns from an existing table. The dropped columns are removed from all current rows of the table. Subsequent SQL statements must not attempt to make any use of the dropped columns. You cannot drop columns that are in the table's primary key. You cannot drop columns that are in any of the table's foreign keys until you have dropped all foreign keys. You cannot drop columns that are indexed until all indexes on the column have been dropped. ALTER TABLE
cannot be used to drop all of the columns of a table. Use DROP TABLE
instead.
When a column is dropped from a table, all commands referencing that table need to be recompiled. An error may result at recompilation time if a dropped column was referenced. The application must re-prepare those commands, and rebuild any parameters and result columns. When a column is added to a table, the commands that contain a SELECT *
statement are invalidated. Only these commands must be re-prepared. All other commands continue to work as expected.
When you drop a column, the column space is not freed.
When you add a UNIQUE
constraint, there is overhead incurred (in terms of additional space and additional time). This is because an index is created to maintain the UNIQUE
constraint. You cannot use the DROP INDEX
statement to drop an index used to maintain the UNIQUE
constraint.
A UNIQUE
constraint and its associated index cannot be dropped if it is being used as a unique index on a replicated table.
Use ALTER TABLE...USE RANGE INDEX
if your application performs range queries over a table's primary key.
Use ALTER TABLE...USE HASH INDEX
if your application performs exact match lookups on a table's primary key.
An error is generated if a table has no primary key and either the USE HASH INDEX
clause or the USE RANGE INDEX
clause is specified.
Make sure to stop the replication agent before adding or dropping a foreign key on a replicated table.
If ON DELETE CASCADE
is specified on a foreign key constraint for a child table, a user can delete rows from a parent table for which the user has the DELETE
privilege without requiring explicit DELETE
privilege on the child table.
To change the ON DELETE CASCADE
triggered action, drop then redefine the foreign key constraint.
ON DELETE CASCADE
is supported on detail tables of a materialized view. If you have a materialized view defined over a child table, a deletion from the parent table causes cascaded deletes in the child table. This, in turn, triggers changes in the materialized view.
The total number of rows reported by the DELETE
statement does not include rows deleted from child tables as a result of the ON DELETE CASCADE
action.
For ON DELETE CASCADE
, since different paths may lead from a parent table to a child table, the following rule is enforced:
Either all paths from a parent table to a child table are "delete" paths or all paths from a parent table to a child table are "do not delete" paths.
Specify ON DELETE CASCADE
on all child tables on the "delete" path.
This rule does not apply to paths from one parent to different children or from different parents to the same child.
For ON DELETE CASCADE
, a second rule is also enforced:
If a table is reached by a "delete" path, then all its children are also reached by a "delete" path.
For ON DELETE CASCADE
with replication, the following restrictions apply:
The foreign keys specified with ON DELETE CASCADE
must match between the Master and subscriber for replicated tables. Checking is done at runtime. If there is an error, the receiver thread stops working.
All tables in the delete cascade tree have to be replicated if any table in the tree is replicated. This restriction is checked when the replication scheme is created or when a foreign key with ON DELETE CASCADE
is added to one of the replication tables. If an error is found, the operation is aborted. You may be required to drop the replication scheme first before trying to change the foreign key constraint.
The ALTER TABLE ADD/DROP CONSTRAINT
statement has the following restrictions:
When a foreign key is dropped, TimesTen also drops the index associated with the foreign key. Attempting to drop an index associated with a foreign key using the regular DROP INDEX
statement results in an error.
Foreign keys cannot be added or dropped on tables in a cache group.
Foreign keys cannot be added or dropped on views or temporary tables.
After you have defined an aging policy for the table, you cannot change the policy from LRU to time-based or from time-based to LRU. You must first drop aging and then alter the table to add a new aging policy.
The aging policy must be defined to change the aging state.
The following rules determine if a row is accessed or referenced for LRU aging:
Any rows used to build the result set of a SELECT
statement.
Any rows used to build the result set of an INSERT ... SELECT
statement.
Any rows that are about to be updated or deleted.
Compiled commands are marked invalid and need recompilation when you either drop LRU aging from or add LRU aging to tables that are referenced in the commands.
Call the ttAgingScheduleNow
procedure to schedule the aging process right away regardless if the aging state is ON
or OFF
.
For the time-based aging policy, you cannot add or modify the aging column. This is because you cannot add or modify a NOT NULL
column.
Aging restrictions:
You cannot drop the column that is used for time-based aging.
Tables that are related by foreign keys must have the same aging policy.
For LRU aging, if a child row is not a candidate for aging, neither this child row nor its parent row are deleted. ON DELETE CASCADE
settings are ignored.
For time-based aging, if a parent row is a candidate for aging, then all child rows are deleted. ON DELETE CASCADE
(whether specified or not) is ignored.
Restrictions for in-memory columnar compression of tables:
You can add compressed column groups with the ALTER TABLE
statement only if the table was enabled for compression at table creation. You can add uncompressed columns to any table, including tables enabled for compression. A table is enabled for compression during creation when OPTIMIZED FOR READ
is specified. Refer to "In-memory columnar compression of tables" for more details on adding compressed column groups to a table.
You cannot modify columns of a compressed column group.
You can drop all columns within a compressed column group with the ALTER TABLE
command; when removing columns in a compressed column group, all columns in the compressed column group must be specified for removal.
You cannot use ALTER TABLE
to modify an existing uncompressed column to make it compressed. For example:
Command> create table foo1 (a varchar2 (30), b int, c int) compress ((a,b) by dictionary) optimized for read; Command> alter table foo1 add (d int) compress (c by dictionary); 2246: Cannot change compression clause for already defined column C The command failed.
Add returnrate
column to parts
table.
ALTER TABLE parts ADD COLUMN returnrate DOUBLE;
Add numassign
and prevdept
columns to contractor
table.
ALTER TABLE contractor ADD ( numassign INTEGER, prevdept CHAR(30) );
Remove addr1
and addr2
columns from employee
table.
ALTER TABLE employee DROP ( addr1, addr2 );
Drop the UNIQUE
title column of the books
table.
ALTER TABLE books DROP UNIQUE (title);
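Conversely, a unique constraint can be added with the ADD UNIQUE clause. A sketch using the same table (assuming no duplicate titles exist):
ALTER TABLE books ADD UNIQUE (title);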
Add the x1
column to the t1
table with a default value of 5:
ALTER TABLE t1 ADD (x1 INT DEFAULT 5);
Change the default value of column x1
to 2:
ALTER TABLE t1 MODIFY (x1 DEFAULT 2);
Alter table primarykeytest
to add the primary key constraint c1
. Use the ttIsql
INDEXES
command to show that the primary key constraint c1
is created and a range index is used:
Command> CREATE TABLE primarykeytest (col1 TT_INTEGER NOT NULL); Command> ALTER TABLE primarykeytest ADD CONSTRAINT c1 > PRIMARY KEY (col1); Command> INDEXES primarykeytest; Indexes on table SAMPLEUSER.PRIMARYKEYTEST: C1: unique range index on columns: COL1 1 index found. 1 index found on 1 table.
Alter table prikeyhash
to add the primary key constraint c2
using a hash index. Use the ttIsql
INDEXES
command to show that the primary key constraint c2
is created and a hash index is used:
Command> CREATE TABLE prikeyhash (col1 NUMBER (3,2) NOT NULL); Command> ALTER TABLE prikeyhash ADD CONSTRAINT c2 > PRIMARY KEY (col1) USE HASH INDEX PAGES = 20; Command> INDEXES prikeyhash; Indexes on table SAMPLEUSER.PRIKEYHASH: C2: unique hash index on columns: COL1 1 index found. 1 table found.
Attempt to add a primary key constraint on a table already defined with a primary key. You see an error:
Command> CREATE TABLE oneprikey (col1 VARCHAR2 (30) NOT NULL, > col2 TT_BIGINT NOT NULL, col3 CHAR (15) NOT NULL, > PRIMARY KEY (col1,col2)); Command> ALTER TABLE oneprikey ADD CONSTRAINT c2 > PRIMARY KEY (col1,col2); 2235: Table can have only one primary key The command failed.
Attempt to add a primary key constraint on a column that is not defined as NOT NULL
. You see an error:
Command> CREATE TABLE prikeynull (col1 CHAR (30)); Command> ALTER TABLE prikeynull ADD CONSTRAINT c3 > PRIMARY KEY (col1); 2236: Nullable column cannot be part of a primary key The command failed.
This example illustrates the use of range and hash indexes. It creates the pkey
table with col1
as the primary key. A range index is created by default. The table is then altered to change the index on col1
to a hash index. The table is altered again to change the index back to a range index.
Command> CREATE TABLE pkey (col1 TT_INTEGER PRIMARY KEY, col2 VARCHAR2 (20)); Command> INDEXES pkey; Indexes on table SAMPLEUSER.PKEY: PKEY: unique range index on columns: COL1 1 index found. 1 index found on 1 table.
Alter the pkey
table to use a hash index:
Command> ALTER TABLE pkey USE HASH INDEX PAGES = CURRENT; Command> INDEXES pkey; Indexes on table SAMPLEUSER.PKEY: PKEY: unique hash index on columns: COL1 1 index found. 1 table found.
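A hash index created this way can later be resized with the SET PAGES clause. A sketch (the page count is illustrative only):
Command> ALTER TABLE pkey SET PAGES = 1000;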
Alter the pkey
table to use a range index with the USE RANGE INDEX
clause:
Command> ALTER TABLE pkey USE RANGE INDEX; Command> INDEXES pkey; Indexes on table SAMPLEUSER.PKEY: PKEY: unique range index on columns: COL1 1 index found. 1 table found.
This example generates an error when attempting to alter a table to define either a range or hash index on a column without a primary key.
Command> CREATE TABLE illegalindex (Col1 CHAR (20)); Command> ALTER TABLE illegalindex USE RANGE INDEX; 2810: The table has no primary key so cannot change its index type The command failed. Command> ALTER TABLE illegalindex USE HASH INDEX PAGES = CURRENT; 2810: The table has no primary key so cannot change its index type The command failed.
These examples show how time resolution works with aging. In this example, lifetime is 3 days.
If (SYSDATE - ColumnValue) <= 3
, do not age out the row.
If (SYSDATE - ColumnValue) > 3
, then the row is a candidate for aging.
If (SYSDATE - ColumnValue) = 3 days, 22 hours, then the row is not aged out because the lifetime was specified in days. The row would be aged out if the lifetime had been specified as 72 hours.
This example alters a table by adding LRU aging. The table has no previous aging policy. The aging state is ON
by default.
ALTER TABLE agingdemo3 ADD AGING LRU; Command> DESCRIBE agingdemo3; Table USER.AGINGDEMO3: Columns: *AGINGID NUMBER NOT NULL NAME VARCHAR2 (20) INLINE Aging lru on 1 table found. (primary key columns are indicated with *)
This example alters a table by adding time-based aging. The table has no previous aging policy. The agingcolumn
column is used for aging. LIFETIME
is 2 days. CYCLE
is 30 minutes.
ALTER TABLE agingdemo4 ADD AGING USE agingcolumn LIFETIME 2 DAYS CYCLE 30 MINUTES; Command> DESCRIBE agingdemo4; Table USER.AGINGDEMO4: Columns: *AGINGID NUMBER NOT NULL NAME VARCHAR2 (20) INLINE AGINGCOLUMN TIMESTAMP (6) NOT NULL Aging use AGINGCOLUMN lifetime 2 days cycle 30 minutes on
This example illustrates that after you create an aging policy, you cannot change it. You must drop aging and redefine.
CREATE TABLE agingdemo5 (agingid NUMBER NOT NULL PRIMARY KEY ,name VARCHAR2 (20) ,agingcolumn TIMESTAMP NOT NULL ) AGING USE agingcolumn LIFETIME 3 DAYS OFF; ALTER TABLE agingdemo5 ADD AGING LRU; 2980: Cannot add aging policy to a table with an existing aging policy. Have to drop the old aging first The command failed.
Drop aging on the table and redefine with LRU aging.
ALTER TABLE agingdemo5 DROP AGING; ALTER TABLE agingdemo5 ADD AGING LRU; Command> DESCRIBE agingdemo5; Table USER.AGINGDEMO5: Columns: *AGINGID NUMBER NOT NULL NAME VARCHAR2 (20) INLINE AGINGCOLUMN TIMESTAMP (6) NOT NULL Aging lru on 1 table found. (primary key columns are indicated with *)
This example alters a table by setting the aging state to OFF
. The table has been defined with a time-based aging policy. If you set the aging state to OFF
, aging is not done automatically. This is useful to use an external scheduler to control the aging process. Set aging state to OFF
and then call the ttAgingScheduleNow
procedure to start the aging process.
Command> DESCRIBE agingdemo4; Table USER.AGINGDEMO4: Columns: *AGINGID NUMBER NOT NULL NAME VARCHAR2 (20) INLINE AGINGCOLUMN TIMESTAMP (6) NOT NULL Aging use AGINGCOLUMN lifetime 2 days cycle 30 minutes on ALTER TABLE AgingDemo4 SET AGING OFF;
Note that when you describe agingdemo4
, the aging policy is defined and the aging state is set to OFF
.
Command> DESCRIBE agingdemo4; Table USER.AGINGDEMO4: Columns: *AGINGID NUMBER NOT NULL NAME VARCHAR2 (20) INLINE AGINGCOLUMN TIMESTAMP (6) NOT NULL Aging use AGINGCOLUMN lifetime 2 days cycle 30 minutes off 1 table found. (primary key columns are indicated with *)
Call ttAgingScheduleNow
to invoke aging with an external scheduler:
Command> CALL ttAgingScheduleNow ('agingdemo4');
Attempt to alter a table by adding a column and then using that column for time-based aging. An error is generated because the added column is nullable.
Command> DESCRIBE x; Table USER1.X: Columns: *ID TT_INTEGER NOT NULL 1 table found. (primary key columns are indicated with *) Command> ALTER TABLE x ADD COLUMN t TIMESTAMP; Command> ALTER TABLE x ADD AGING USE t LIFETIME 2 DAYS; 2993: Aging column cannot be nullable The command failed.
Attempt to alter the LIFETIME
clause for a table defined with time-based aging. The aging column is defined with data type TT_DATE
. An error is generated because the LIFETIME
unit is not expressed in DAYS
.
Command> CREATE TABLE aging1 (col1 TT_DATE NOT NULL) AGING USE col1 LIFETIME 2 DAYS; Command> ALTER TABLE aging1 SET AGING LIFETIME 2 HOURS; 2977: Only DAY lifetime unit is allowed with a TT_DATE column The command failed.
Alter the employees
table to add a new compressed column of state
, which contains the full name of the state. Note that the employees
table already has a compressed column group consisting of job_id
and manager_id
.
Command> ALTER TABLE employees ADD COLUMN state VARCHAR2(20) COMPRESS (state BY DICTIONARY); Command> DESCRIBE employees; Table MYSCHEMA.EMPLOYEES: Columns: *EMPLOYEE_ID NUMBER (6) NOT NULL FIRST_NAME VARCHAR2 (20) INLINE LAST_NAME VARCHAR2 (25) INLINE NOT NULL EMAIL VARCHAR2 (25) INLINE NOT NULL PHONE_NUMBER VARCHAR2 (20) INLINE HIRE_DATE DATE NOT NULL JOB_ID VARCHAR2 (10) INLINE NOT NULL SALARY NUMBER (8,2) COMMISSION_PCT NUMBER (2,2) MANAGER_ID NUMBER (6) DEPARTMENT_ID NUMBER (4) STATE VARCHAR2 (20) INLINE COMPRESS ( ( JOB_ID, MANAGER_ID ) BY DICTIONARY, STATE BY DICTIONARY ) OPTIMIZED FOR READ 1 table found. (primary key columns are indicated with *)
The following example drops the compressed column state
from the employees
table:
Command> ALTER TABLE employees DROP state; Command> DESCRIBE employees; Table MYSCHEMA.EMPLOYEES: Columns: *EMPLOYEE_ID NUMBER (6) NOT NULL FIRST_NAME VARCHAR2 (20) INLINE LAST_NAME VARCHAR2 (25) INLINE NOT NULL EMAIL VARCHAR2 (25) INLINE NOT NULL PHONE_NUMBER VARCHAR2 (20) INLINE HIRE_DATE DATE NOT NULL JOB_ID VARCHAR2 (10) INLINE NOT NULL SALARY NUMBER (8,2) COMMISSION_PCT NUMBER (2,2) MANAGER_ID NUMBER (6) DEPARTMENT_ID NUMBER (4) COMPRESS ( ( JOB_ID, MANAGER_ID ) BY DICTIONARY ) OPTIMIZED FOR READ 1 table found. (primary key columns are indicated with *)
CREATE TABLE
DROP TABLE
"Implementing aging in your tables" in Oracle TimesTen In-Memory Database Operations Guide
The ALTER USER
statement enables a user to change the user's own password. A user with the ADMIN
privilege can change another user's password.
This statement also enables a user to change another user from internal to external or from external to internal.
No privilege is required to change the user's own password.
ADMIN
privilege is required to change another user's password.
ADMIN
privilege is required to change users from internal to external and from external to internal.
ALTER USER user IDENTIFIED BY {password | "password"} ALTER USER user IDENTIFIED EXTERNALLY
Parameter | Description |
---|---|
user |
Name of the user whose password is being changed. |
IDENTIFIED BY |
Identification clause. |
password |" password " |
Specifies the password that identifies the internal user to the TimesTen database. |
EXTERNALLY |
Identifies the operating system user to the TimesTen database. To perform database operations as an external user, the process needs a TimesTen external user name that matches the user name authenticated by the operating system or network. A password is not required by TimesTen because the user has been authenticated by the operating system at login time. |
Database users can be internal or external.
Internal users are defined for a TimesTen database.
External users are defined by an external authority, such as the operating system. External users cannot be assigned a TimesTen password.
If you are an internal user connected as user
, execute this statement to change your TimesTen password.
Passwords are case-sensitive.
You cannot alter a user across a client/server connection. You must use a direct connection when altering a user.
When replication is configured, this statement is replicated.
To change the password for internal user terry
to "12345"
from its current setting, use:
ALTER USER terry IDENTIFIED BY "12345"; User altered.
To change user terry
to an external user:
ALTER USER terry IDENTIFIED EXTERNALLY; User altered.
To change user terry
back to an internal user, provide a password:
ALTER USER terry IDENTIFIED BY "secret"; User altered.
CREATE USER
DROP USER
GRANT
REVOKE
Use the CALL
statement to invoke a TimesTen built-in procedure or to execute a PL/SQL procedure or function that is standalone or part of a package from within SQL.
The privileges required for invoking each TimesTen built-in procedure are listed in the description of each procedure in the "Built-In Procedures" section in the Oracle TimesTen In-Memory Database Reference.
No privileges are required for an owner calling its own PL/SQL procedure or function that is standalone or part of a package using the CALL
statement. For all other users, the EXECUTE
privilege on the procedure or function or on the package in which it is defined is required.
To call a TimesTen built-in procedure:
CALL TimesTenBuiltIn [( arguments )]
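For example, a built-in procedure with no required arguments can be called directly from ttIsql. A minimal sketch using the ttConfiguration built-in procedure, which returns the current connection attribute settings (output not shown):

Command> CALL ttConfiguration;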
When calling PL/SQL procedures or functions that are standalone or part of a package, you can either call these by name or as the result of an expression.
To call a PL/SQL procedure:
CALL [Owner.][Package.]ProcedureName [( arguments )]
To call a PL/SQL function and return its result into a parameter, use the following syntax:
CALL [Owner.][Package.]FunctionName [( arguments )] INTO :return_param
Note:
A user's own PL/SQL procedure or function takes precedence over a TimesTen built-in procedure with the same name.
Parameter | Description |
---|---|
TimesTenBuiltIn |
Name of the TimesTen built-in procedure. For a full list of TimesTen built-in procedures, see "Built-In Procedures" in the Oracle TimesTen In-Memory Database Reference. |
[ Owner .] ProcedureName |
Name of the PL/SQL procedure. You can optionally specify the owner of the procedure. |
[ Owner .] FunctionName |
Name of the PL/SQL function. You can optionally specify the owner of the function. |
arguments |
Specify 0 or more arguments for the PL/SQL procedure or function. |
INTO |
If the routine is a function, the INTO clause is required. |
return_param |
Specify the host variable that stores the return value of the function. |
Detailed information on how to execute PL/SQL procedures or functions with the CALL
statement in TimesTen is provided in "How to execute PL/SQL procedures and functions" in the Oracle TimesTen In-Memory Database PL/SQL Developer's Guide, "Using CALL to execute procedures and functions" in the Oracle TimesTen In-Memory Database C Developer's Guide, or "Using CALL to execute procedures and functions" in the Oracle TimesTen In-Memory Database Java Developer's Guide.
The following is the definition of the mytest
function:
create or replace function mytest return number is begin return 1; end; /
Perform the following to execute the mytest
function in a CALL
statement:
Command> variable n number; Command> call mytest() into :n; Command> print n; N : 1
The following example creates a function that returns the salary of the employee whose employee ID is specified as input, then calls the function and displays the result that was returned.
Command> CREATE OR REPLACE FUNCTION get_sal > (p_id employees.employee_id%TYPE) RETURN NUMBER IS > v_sal employees.salary%TYPE := 0; > BEGIN > SELECT salary INTO v_sal FROM employees > WHERE employee_id = p_id; > RETURN v_sal; > END get_sal; > / Function created. Command> variable n number; Command> call get_sal(100) into :n; Command> print n; N : 24000
The COMMIT
statement ends the current transaction and makes permanent all changes performed in the transaction.
None
COMMIT [WORK]
The COMMIT
statement enables the following optional keyword:
Parameter | Description |
---|---|
[WORK] |
Optional clause supported for compliance with the SQL standard. COMMIT and COMMIT WORK are equivalent. |
Until you commit a transaction:
You can see any changes you have made during the transaction but other users cannot see the changes. After you commit the transaction, the changes are visible to other users' statements that execute after the commit.
You can roll back (undo) changes made during the transaction with the ROLLBACK
statement.
This statement releases transaction locks.
For passthrough, the Oracle Database transaction will also be committed.
A commit closes all open cursors.
Insert row into regions
table of the HR
schema and commit transaction. First set autocommit to 0:
Command> SET AUTOCOMMIT 0; Command> INSERT INTO regions VALUES (5,'Australia'); 1 row inserted. Command> COMMIT; Command> SELECT * FROM regions; < 1, Europe > < 2, Americas > < 3, Asia > < 4, Middle East and Africa > < 5, Australia > 5 rows found.
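By contrast, a transaction that has not yet been committed can be undone with the ROLLBACK statement. A minimal sketch against the same regions table; the inserted row and the output shown are illustrative only:

Command> SET AUTOCOMMIT 0;
Command> INSERT INTO regions VALUES (6,'Antarctica');
1 row inserted.
Command> ROLLBACK;
Command> SELECT * FROM regions WHERE region_id = 6;
0 rows found.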
This statement creates an active standby pair. It includes an active master database, a standby master database, and may also include one or more read-only subscribers. The active master database replicates updates to the standby master database, which propagates the updates to the subscribers.
ADMIN
CREATE ACTIVE STANDBY PAIR FullStoreName, FullStoreName [ReturnServiceAttribute]
  [SUBSCRIBER FullStoreName [,...]]
  [STORE FullStoreName [StoreAttribute [...]]]
  [NetworkOperation [...]]
  [{INCLUDE | EXCLUDE}
     {TABLE [[Owner.]TableName [,...]] |
      CACHE GROUP [[Owner.]CacheGroupName [,...]] |
      SEQUENCE [[Owner.]SequenceName [,...]]} [,...]]
The syntax for ReturnServiceAttribute
is
{ RETURN RECEIPT [BY REQUEST] | RETURN TWOSAFE [BY REQUEST] | NO RETURN }
Syntax for StoreAttribute
is:
DISABLE RETURN {SUBSCRIBER | ALL} NumFailures
RETURN SERVICES {ON | OFF} WHEN [REPLICATION] STOPPED
DURABLE COMMIT {ON | OFF}
RESUME RETURN MilliSeconds
LOCAL COMMIT ACTION {NO ACTION | COMMIT}
RETURN WAIT TIME Seconds
COMPRESS TRAFFIC {ON | OFF}
PORT PortNumber
TIMEOUT Seconds
FAILTHRESHOLD Value
TABLE DEFINITION CHECKING {RELAXED|EXACT}
Syntax for NetworkOperation
:
ROUTE MASTER FullStoreName SUBSCRIBER FullStoreName { { MASTERIP MasterHost | SUBSCRIBERIP SubscriberHost } PRIORITY Priority } [...]
Parameter | Description |
---|---|
FullStoreName |
The database to include in the active standby pair. FullStoreName is the database file name specified in the DataStore attribute of the DSN description. |
RETURN RECEIPT [BY REQUEST] |
Enables the return receipt service, so that applications that commit a transaction to an active master database are blocked until the transaction is received by the standby master database.
Specifying |
RETURN TWOSAFE [BY REQUEST] |
Enables the return twosafe service, so that applications that commit a transaction to an active master database are blocked until the transaction is committed on the standby master database.
Specifying For details on the use of the return services, see "Using a return service" in Oracle TimesTen In-Memory Database Replication Guide. |
DISABLE RETURN {SUBSCRIBER | ALL} NumFailures |
Set the return service failure policy so that return service blocking is disabled after the number of timeouts specified by NumFailures .
Specifying This failure policy can be specified for either the See "Managing return service timeout errors and replication state changes" in Oracle TimesTen In-Memory Database Replication Guide for details. |
RETURN SERVICES {ON | OFF} WHEN [REPLICATION] STOPPED |
Sets return services on or off when replication is disabled (stopped or paused state).
See "Establishing return service failure/recovery policies" in Oracle TimesTen In-Memory Database Replication Guide. |
RESUME RETURN Milliseconds |
If DISABLE RETURN has disabled return service blocking, this attribute sets the policy for when to re-enable the return service. |
NO RETURN |
Specifies that no return service is to be used. This is the default.
For details on the use of the return services, see "Using a return service" in Oracle TimesTen In-Memory Database Replication Guide. |
RETURN WAIT TIME Seconds |
Specifies the number of seconds to wait for return service acknowledgment. A value of 0 means that there is no waiting. The default value is 10 seconds.
The application can override this timeout setting by using the |
SUBSCRIBER FullStoreName [,...]] |
A database that receives updates from a master database. FullStoreName is the database file name specified in the DataStore attribute of the DSN description. |
STORE FullStoreName [ StoreAttribute [...]] |
Defines the attributes for the specified database. Attributes include PORT , TIMEOUT and FAILTHRESHOLD . FullStoreName is the database file name specified in the DataStore attribute of the DSN description. |
TABLE DEFINITION CHECKING {EXACT|RELAXED} |
StoreAttribute clause.
Specifies type of table definition checking that occurs on the subscriber:
The default is |
{INCLUDE | EXCLUDE} |
An active standby pair replicates an entire database by default. Use INCLUDE to replicate only the listed tables, sequences or cache groups, or use EXCLUDE to replicate everything except the listed objects. |
COMPRESS TRAFFIC {ON | OFF} |
Compress replicated traffic to reduce the amount of network bandwidth. ON specifies that all replicated traffic for the database defined by STORE be compressed. OFF (the default) specifies no compression. See "Compressing replicated traffic" in Oracle TimesTen In-Memory Database Replication Guide for details. |
DURABLE COMMIT {ON | OFF} |
Overrides the DurableCommits general connection attribute setting. DURABLE COMMIT ON enables durable commits regardless of whether the replication agent is running or stopped. It also enables durable commits when the ttRepStateSave built-in procedure has marked the standby database as failed. |
FAILTHRESHOLD Value |
The number of log files that can accumulate for a subscriber database. If this value is exceeded, the subscriber is set to the Failed state. The value 0 means "No Limit." This is the default.
See "Setting the transaction log failure threshold" in Oracle TimesTen In-Memory Database Replication Guide for more information. |
LOCAL COMMIT ACTION |
Specifies the default action to be taken for a return twosafe transaction in the event of a timeout.
Note: This attribute is valid only when the
This setting can be overridden for specific transactions by calling the |
MASTER FullStoreName |
The database on which applications update the specified element. The MASTER database sends updates to its SUBSCRIBER databases. The FullStoreName must be the database specified in the DataStore attribute of the DSN description. |
PORT PortNumber |
The TCP/IP port number on which the replication agent for the database listens for connections. If not specified, the replication agent automatically allocates a port number.
In an active standby pair, the standby master database listens for updates from the active master database. Read-only subscribers listen for updates from the standby master database. |
ROUTE MASTER FullStoreName SUBSCRIBER FullStoreName |
Denotes the NetworkOperation clause. If specified, enables you to control the network interface that a master store uses for every outbound connection to each of its subscriber stores. In the context of the ROUTE clause, you can define the following:
When using active standby pairs, For |
MASTERIP MasterHost | SUBSCRIBERIP SubscriberHost |
MasterHost and SubscriberHost are the IP addresses for the network interface on the master and subscriber stores. Specify in dot notation or canonical format or in colon notation for IPV6.
Clause can be specified more than once. |
PRIORITY Priority |
Variable expressed as an integer from 1 to 99. Denotes the priority of the IP address. Lower integral values have higher priority. An error is returned if multiple addresses with the same priority are specified. Controls the order in which multiple IP addresses are used to establish peer connections.
Required syntax of |
TIMEOUT Seconds |
The maximum number of seconds the replication agent waits for a response from the database. Default: 120 seconds.
In an active standby pair, the active master database sends messages to the standby master database. The standby master database sends messages to the read-only subscribers. |
After you create an active standby pair, make one of your databases the active database. To accomplish this, call ttRepStateSet
('ACTIVE')
. Then use ttRepAdmin
to duplicate the active database to the second database. When the operation is successful, the second database becomes the standby database. For more information, see "Setting up an active standby pair with no cache groups" in Oracle TimesTen In-Memory Database Replication Guide.
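A sketch of that sequence follows. The DSNs, host name and credentials are placeholders, and the ttRepAdmin options shown are assumptions to be checked against the utility documentation for your release:

-- On the database that is to become the active master:
Command> CALL ttRepStateSet('ACTIVE');

-- Then, from the standby host, duplicate the active database (operating system shell command;
-- option names are assumed, verify against the ttRepAdmin reference):
-- ttRepAdmin -duplicate -from rep1 -host "machine1" -uid adminuser -pwd adminpwd rep2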
The SUBSCRIBER
clause lists one or more read-only subscriber databases. You can designate up to 127 subscriber databases.
Replication between the active master database and the standby master database can be RETURN TWOSAFE
, RETURN RECEIPT
, or asynchronous. RETURN TWOSAFE
ensures no transaction loss.
Use the INCLUDE
and EXCLUDE
clauses to exclude the listed tables, sequences and cache groups from replication, or to include only the listed tables, sequences and cache groups, excluding all others.
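For example, based on the syntax above, an active standby pair that replicates the entire database except one table could be created as in the following sketch (the table name is hypothetical):

CREATE ACTIVE STANDBY PAIR rep1, rep2
  EXCLUDE TABLE ttuser.scratch_data;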
If the active standby pair has the RETURN TWOSAFE
attribute and replicates a cache group, a transaction may fail if:
The transaction that is being replicated contains an ALTER TABLE
statement or an ALTER CACHE GROUP
statement.
The transaction contains an INSERT
, UPDATE
or DELETE
statement on a replicated table, replicated cache group or an asynchronous writethrough cache group.
You can use an active standby pair to replicate read-only cache groups and asynchronous writethrough (AWT) cache groups. You cannot use an active standby pair to replicate synchronous writethrough (SWT) cache groups or user managed cache groups.
You cannot execute the CREATE ACTIVE STANDBY PAIR
statement when Oracle Clusterware is used with TimesTen.
This example creates an active standby pair whose master databases are rep1
and rep2
. There is one subscriber, rep3
. The type of replication is RETURN RECEIPT
. The statement also sets PORT
and TIMEOUT
attributes for the master databases.
CREATE ACTIVE STANDBY PAIR rep1, rep2 RETURN RECEIPT SUBSCRIBER rep3 STORE rep1 PORT 21000 TIMEOUT 30 STORE rep2 PORT 22000 TIMEOUT 30;
Specify NetworkOperation
clause to control network interface:
CREATE ACTIVE STANDBY PAIR rep1,rep2 ROUTE MASTER rep1 ON "machine1" SUBSCRIBER rep2 ON "machine2" MASTERIP "1.1.1.1" PRIORITY 1 SUBSCRIBERIP "2.2.2.2" PRIORITY 1; ROUTE MASTER rep2 ON "machine2" SUBSCRIBER rep1 ON "machine1" MASTERIP "2.2.2.2" PRIORITY 1 SUBSCRIBERIP "1.1.1.1" PRIORITY 1;
ALTER ACTIVE STANDBY PAIR
DROP ACTIVE STANDBY PAIR
The CREATE CACHE GROUP
statement:
Creates the table defined by the cache group.
Loads all new information associated with the cache group in the appropriate system tables.
A cache group is a set of tables related through foreign keys that cache data from tables in an Oracle database. There is one root table that does not reference any of the other tables. All other cache tables in the cache group reference exactly one other table in the cache group. In other words, the foreign key relationships form a tree.
A cache table is a set of rows satisfying the conditions:
The rows constitute a subset of the rows of a vertical partition of an Oracle database table.
The rows are stored in a TimesTen table with the same name as the Oracle database table.
If a database has more than one cache group, the cache groups must correspond to different Oracle database (and TimesTen) tables.
Cache group instance refers to a row in the root table and all the child table rows related directly or indirectly to the root table rows.
User managed and system managed cache groups
A cache group can be either system managed or user managed.
A system managed cache group is fully managed by TimesTen and has fixed properties. System managed cache group types include:
Read-only cache groups are updated in the Oracle database, and the updates are propagated from the Oracle database to the cache.
Asynchronous writethrough (AWT) cache groups are updated in the cache and the updates are propagated to the Oracle database. Transactions continue executing on the cache without waiting for a commit on the Oracle database.
Synchronous writethrough (SWT) cache groups are updated in the cache and the updates are propagated to the Oracle database. Transactions are committed on the cache after notification that a commit has occurred on the Oracle database.
Because TimesTen manages system managed cache groups, including loading and unloading the cache group, certain statements and clauses cannot be used in the definition of these cache groups, including:
WHERE
clauses in AWT and SWT cache group definitions
READONLY
, PROPAGATE
and NOT PROPAGATE
in cache table definitions
AUTOREFRESH
in AWT and SWT cache group definitions
The FLUSH CACHE GROUP
and REFRESH CACHE GROUP
operations are not allowed for AWT and SWT cache groups.
You must stop the replication agent before creating an AWT cache group.
A user managed cache group must be managed by the application or user. PROPAGATE
in a user managed cache group is synchronous. The table-level READONLY
keyword can only be used for user managed cache groups.
In addition, both TimesTen and Oracle Database must be able to parse all WHERE
clauses.
Explicitly loaded cache groups and dynamic cache groups
Cache groups can be explicitly or dynamically loaded.
In cache groups that are explicitly loaded, new cache instances are loaded manually into the TimesTen cache tables from the Oracle database tables using a LOAD CACHE GROUP
or REFRESH CACHE GROUP
statement or automatically using an autorefresh operation.
In a dynamic cache group, new cache instances can be loaded manually into the TimesTen cache tables by using a LOAD CACHE GROUP
or on demand using a dynamic load operation. In a dynamic load operation, data is automatically loaded into the TimesTen cache tables from the cached Oracle database tables when a SELECT
, UPDATE
, DELETE
or INSERT
statement is issued on one of the cache tables, where the data is not present in the cache table but does exist in the cached Oracle database table. A manual refresh or automatic refresh operation on a dynamic cache group can result in the updating or deleting of existing cache instances, but not in the loading of new cache instances.
Any cache group type (read-only, asynchronous writethrough, synchronous writethrough, user managed) can be defined as an explicitly loaded cache group.
Any cache group type can be defined as a dynamic cache group except a user managed cache group that has both the AUTOREFRESH
cache group attribute and the PROPAGATE
cache table attribute.
Data in a dynamic cache group is aged out because LRU aging is defined by default. Use the ttAgingLRUConfig
built-in procedure to override the space usage thresholds for LRU aging. You can also define time-based aging on a dynamic cache group to override LRU aging.
For more information on explicitly loaded and dynamic cache groups, see Oracle In-Memory Database Cache User's Guide. For more information about the dynamic load operation, see "Dynamically loading a cache instance" in Oracle In-Memory Database Cache User's Guide.
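As an illustration of dynamic load, the following sketch creates a dynamic AWT cache group over the customer table used in the examples at the end of this section; the subsequent SELECT loads the matching cache instance from the Oracle database if it is not already cached (the custid value is hypothetical):

CREATE DYNAMIC ASYNCHRONOUS WRITETHROUGH CACHE GROUP dyncustomers
FROM customer
  (custid INT NOT NULL,
   name   CHAR(100) NOT NULL,
   addr   CHAR(100),
   zip    INT,
   PRIMARY KEY(custid));

-- Dynamic load: if this row is not yet in the TimesTen cache table but exists in the
-- cached Oracle database table, it is loaded automatically when the statement executes.
SELECT * FROM customer WHERE custid = 1001;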
You can create either local or global cache groups.
In a local cache group, data in the cache tables are not shared across TimesTen databases even if the databases are members of the same cache grid. Therefore, the databases can have overlapping data or the same data. Any cache group type can be defined as a local cache group. A local cache group can be either dynamically or explicitly loaded.
In a global cache group, data in the cache tables are shared among TimesTen databases within a cache grid. Updates to the same data by different grid members are coordinated by the grid. Only an AWT cache group can be defined as a global cache group.
For more information on local and global cache groups, see "Defining Cache Groups" in the Oracle In-Memory Database Cache User's Guide. In addition, see "Example of data sharing among the grid members" in Oracle In-Memory Database Cache User's Guide.
CREATE CACHE GROUP
or CREATE ANY CACHE GROUP
and CREATE TABLE
(if all tables in the cache group are owned by the current user) or CREATE ANY TABLE
(if at least one of the tables in the cache group is not owned by the current user).
There are CREATE CACHE GROUP
statements for each type of cache group:
There is one CREATE CACHE GROUP
statement to create a global cache group:
For read-only cache groups, the syntax is:
CREATE [DYNAMIC] READONLY CACHE GROUP [Owner.]GroupName [AUTOREFRESH [MODE {INCREMENTAL | FULL}] [INTERVAL IntervalValue {MINUTE[S] | SECOND[S] | MILLISECOND[S] }] [STATE {ON|OFF|PAUSED}] ] FROM {[Owner.]TableName ( {ColumnDefinition[,...]} [,PRIMARY KEY(ColumnName[,...])] [,FOREIGN KEY(ColumnName [,...]) REFERENCES RefTableName (ColumnName [,...]) [ON DELETE CASCADE] [UNIQUE HASH ON (HashColumnName[,...]) PAGES=PrimaryPages] [AGING USE ColumnName LIFETIME Num1 {SECOND[S] | MINUTE[S] |HOUR[S] | DAY[S]} [CYCLE Num2 {SECOND[S] | MINUTE[S] |HOUR[S] |DAY[S]}] [ON|OFF] ] [WHERE ExternalSearchCondition] } [,...];
CREATE ASYNCHRONOUS WRITETHROUGH CACHE GROUP
For asynchronous writethrough cache groups, the syntax is:
CREATE [DYNAMIC] [ASYNCHRONOUS] WRITETHROUGH CACHE GROUP [Owner.]GroupName FROM {[Owner.]TableName ( {ColumnDefinition[,...]} [,PRIMARY KEY(ColumnName[,...])] [FOREIGN KEY(ColumnName [,...]) REFERENCES RefTableName (ColumnName [,...])] [ ON DELETE CASCADE ] [UNIQUE HASH ON (HashColumnName[,...]) PAGES=PrimaryPages] [AGING {LRU| USE ColumnName LIFETIME Num1 {SECOND[S] | MINUTE[S] |HOUR[S] |DAY[S]} [CYCLE Num2 {SECOND[S] | MINUTE[S] |HOUR[S] |DAY[S]}] }[ON|OFF] ] } [,...];
CREATE SYNCHRONOUS WRITETHROUGH CACHE GROUP
For synchronous writethrough cache groups, the syntax is:
CREATE [DYNAMIC] SYNCHRONOUS WRITETHROUGH CACHE GROUP [Owner.]GroupName FROM {[Owner.]TableName ( {ColumnDefinition[,...]} [,PRIMARY KEY(ColumnName[,...])] [FOREIGN KEY(ColumnName [,...]) REFERENCES RefTableName (ColumnName [,...])] [ ON DELETE CASCADE ] [UNIQUE HASH ON (HashColumnName[,...]) PAGES=PrimaryPages] [AGING {LRU| USE ColumnName LIFETIME Num1 {SECOND[S] | MINUTE[S] |HOUR[S] |DAY[S]} [CYCLE Num2 {SECOND[S] | MINUTE[S] |HOUR[S] |DAY[S]}] }[ON|OFF] ] } [,...];
CREATE USERMANAGED CACHE GROUP
For user managed cache groups, the syntax is:
CREATE [DYNAMIC][USERMANAGED] CACHE GROUP [Owner.]GroupName [AUTOREFRESH [MODE {INCREMENTAL | FULL}] [INTERVAL IntervalValue {MINUTE[S] | SECOND[S] | MILLISECOND[S] }] [STATE {ON|OFF|PAUSED}] ] FROM {[Owner.]TableName ( {ColumnDefinition[,...]} [,PRIMARY KEY(ColumnName[,...])] [FOREIGN KEY(ColumnName[,...]) REFERENCES RefTableName (ColumnName [,...])] [ON DELETE CASCADE] [, {READONLY | PROPAGATE | NOT PROPAGATE}] [UNIQUE HASH ON (HashColumnName[,...]) PAGES=PrimaryPages] [AGING {LRU| USE ColumnName LIFETIME Num1 {SECOND[S] | MINUTE[S] |HOUR[S] |DAY[S]} [CYCLE Num2 {SECOND[S] | MINUTE[S] |HOUR[S] |DAY[S]}] }[ON|OFF] ] [WHERE ExternalSearchCondition] } [,...];
CREATE WRITETHROUGH GLOBAL CACHE GROUP
The following syntax demonstrates how to create a global cache group to cache data within a cache grid. Specify the DYNAMIC
attribute to enable dynamic load from the Oracle database for the cache group.
CREATE [DYNAMIC] [ASYNCHRONOUS] WRITETHROUGH GLOBAL CACHE GROUP [Owner.]GroupName FROM {[Owner.]TableName ( {ColumnDefinition[,...]} [,PRIMARY KEY(ColumnName[,...])] [FOREIGN KEY(ColumnName [,...]) REFERENCES RefTableName (ColumnName [,...])] [ ON DELETE CASCADE ] [UNIQUE HASH ON (HashColumnName[,...]) PAGES=PrimaryPages] [AGING {LRU| USE ColumnName LIFETIME Num1 {SECOND[S] | MINUTE[S] |HOUR[S] |DAY[S]} [CYCLE Num2 {SECOND[S] | MINUTE[S] |HOUR[S] |DAY[S]}] }[ON|OFF] ] } [,...];
Following are the parameters for the cache group definition before the FROM
keyword:
Parameter | Description |
---|---|
[ Owner .] GroupName |
Owner and name assigned to the new cache group. |
[DYNAMIC] |
If specified, a dynamic cache group is created. |
AUTOREFRESH |
The AUTOREFRESH parameter automatically propagates changes from the Oracle database to the cache group. For details, see "AUTOREFRESH in cache groups". |
MODE [INCREMENTAL | FULL] |
Determines which rows in the cache are updated during an autorefresh. If the INCREMENTAL clause is specified, TimesTen refreshes only rows that have been changed on the Oracle database since the last propagation. If the FULL clause is specified, TimesTen updates all rows in the cache with each autorefresh. The default autorefresh mode is INCREMENTAL . |
INTERVAL IntervalValue |
Indicates the interval at which autorefresh should occur in units of minutes, seconds or milliseconds. IntervalValue is an integer value that specifies how often autorefresh should be scheduled, in MINUTES , SECONDS or MILLISECONDS . The default IntervalValue value is 5 minutes. If the specified interval is not long enough for an autorefresh to complete, a runtime warning is generated and the next autorefresh waits until the current one finishes. An informational message is generated in the support log if the wait queue reaches 10. |
STATE [ON | OFF | PAUSED] |
Specifies whether autorefresh should be ON or OFF or PAUSED when the cache group is created. You can alter this setting later by using the ALTER CACHE GROUP statement. By default, the AUTOREFRESH state is PAUSED . |
FROM |
Designates one or more table definitions for the cache group. |
Everything after the FROM
keyword comprises the definitions of the Oracle database tables cached in the cache group. The syntax for each table definition is similar to that of a CREATE TABLE
statement. However, primary key constraints are required for the cache group table.
Table definitions have the following parameters.
Parameter | Description |
---|---|
[ Owner .] TableName |
Owner and name to be assigned to the new table. If you do not specify the owner name, your login becomes the owner name for the new table. |
ColumnDefinition |
Name of an individual column in a table, its data type and whether it is nullable. Each table must have at least one column. See "Column Definition". |
PRIMARY KEY ( ColumnName [,...]) |
Specifies that the table has a primary key. Primary key constraints are required for a cache group. ColumnName is the name of the column that forms the primary key for the table to be created. Up to 16 columns can be specified for the primary key. Cannot be specified with UNIQUE in one specification. |
FOREIGN KEY ( ColumnName [,...]) |
Specifies that the table has a foreign key. ColumnName is the name of the column that forms the foreign key for the table to be created. See "FOREIGN KEY ". |
REFERENCES RefTableName ( ColumnName [,...]) |
Specifies the table which the foreign key is associated with. RefTableName is the name of the referenced table and ColumnName is the name of the column referenced in the table. |
[ON DELETE CASCADE] |
Enables the ON DELETE CASCADE referential action. If specified, when rows containing referenced key values are deleted from a parent table, rows in child tables with dependent foreign key values are also deleted. |
READONLY |
Specifies that changes cannot be made on the cached table. |
PROPAGATE| NOT PROPAGATE |
Specifies whether changes to the cached table are automatically propagated to the corresponding Oracle database table at commit time. |
UNIQUE HASH ON ( HashColumnName ) |
Specifies that a hash index is created on this table. HashColumnName identifies the column that is to participate in the hash key of this table. The columns specified in the hash index must be identical to the columns in the primary key. |
PAGES = PrimaryPages |
Sizes the hash index to reflect the expected number of pages in your table. To determine the value for PrimaryPages , divide the number of expected rows in your table by 256. For example, if your table has 256,000 rows, specify 1000 for PrimaryPages (256000/256=1000).
The value for If your estimate for For more information on hash indexes, see "CREATE TABLE". |
WHERE ExternalSearchCondition |
The WHERE clause evaluated by the Oracle database for the cache group table. This WHERE clause is added to every LOAD and REFRESH operation on the cache group. It may not directly reference other tables. It is parsed by both TimesTen and Oracle Database. See "Using a WHERE clause" in Oracle In-Memory Database Cache User's Guide. |
AGING LRU [ON | OFF] |
If specified, defines the LRU aging policy on the root table. The LRU aging policy applies to all tables in the cache group. The LRU aging policy defines the type of aging (least recently used (LRU)), the aging state (ON or OFF ) and the LRU aging attributes.
Set the aging state to either In dynamic cache groups, LRU aging is set LRU aging cannot be specified on a cache group with the autorefresh attribute, unless the cache group is dynamic. LRU attributes are defined by calling the For more information about LRU aging, see "Implementing aging in a cache group" in Oracle In-Memory Database Cache User's Guide. |
AGING USE ColumnName ...[ON|OFF] |
If specified, defines the time-based aging policy on the root table. The time-based aging policy applies to all tables in the cache group. The time-based aging policy defines the type of aging (time-based), the aging state (ON or OFF ) and the time-based aging attributes.
Set the aging state to either Time-based aging attributes are defined at the SQL level and are specified by the Specify The values of the column used for aging are updated by your applications. If the value of this column is unknown for some rows, and you do not want the rows to be aged, define the column with a large default value (the column cannot be Aging is disabled by default on an explicitly loaded global cache group. For more information about time-based aging, see "Implementing aging in a cache group" in Oracle In-Memory Database Cache User's Guide. |
LIFETIME Num1 {SECOND[S]|MINUTE[S]|HOUR[S]|DAY[S]} |
LIFETIME is a time-based aging attribute and is a required clause.
Specify the The Specify The concept of time resolution is supported. If |
[CYCLE Num2 {SECOND[S] | MINUTE[S] |HOUR[S]| DAY[S]}] |
CYCLE is a time-based aging attribute and is optional. Specify the CYCLE clause after the LIFETIME clause.
The Specify If you do not specify the If the aging state is |
Two cache groups cannot have the same owner name and group name. If you do not specify the owner name, your login becomes the owner name for the new cache group.
Neither a cache table name nor a cache group name can contain #.
Dynamic parameters are not allowed in the WHERE
clause.
Oracle Database temporary tables cannot be cached.
Each table must correspond to a table in the Oracle database.
In the Oracle database, you can define a parent/child relationship and then insert a null value into the foreign key column of the child table. This means this row in the child table references a null parent. You can then create a cache group and cache the parent/child relationship of the Oracle database tables. However, if you load data from the Oracle database tables into the cache group, the row that contains the null value of the foreign key column is not loaded. TimesTen recommends that you do not create cache groups if the tables you cache define a parent/child relationship in which the foreign key represents a null parent.
You cannot use lowercase delimited identifiers to name your cache tables. Table names in TimesTen are case-insensitive and are stored as uppercase. The name of the cache table must be the same as the Oracle database table name. Uppercase table names on TimesTen will not match mixed case table names on the Oracle database. As a workaround, create a synonym for your table in the Oracle database and use that synonym as the table name for the cache group. This workaround is not available for read-only cache groups or cache groups with the AUTOREFRESH
parameter set.
Each column in the cache table must match each column in the Oracle database table, both in name and in data type. See "Mappings between Oracle Database and TimesTen data types" in Oracle In-Memory Database Cache User's Guide. In addition, each column name must be fully qualified with an owner and table name when referenced in a WHERE
clause.
The WHERE
clause can only directly refer to the cache group table. Tables that are not in the cache group can only be referenced with a subquery.
Generally, you do not have to fully qualify the column names in the WHERE
clause of the CREATE CACHE GROUP
, LOAD CACHE GROUP
, UNLOAD CACHE GROUP
, REFRESH CACHE GROUP
or FLUSH CACHE GROUP
statements. However, since TimesTen automatically generates queries that join multiple tables in the same cache group, a column must be fully qualified if there is more than one table in the cache group that contains columns with the same name.
By default, a range index is created to enforce the primary key for a cache group table. Use the UNIQUE HASH
clause to specify a hash index for the primary key.
If your application performs range queries over a cache group table's primary key, then choose a range index for that cache group table by omitting the UNIQUE HASH
clause.
If, however, your application performs only exact match lookups on the primary key, then a hash index may offer better response time and throughput. In such a case, specify the UNIQUE HASH
clause. See "CREATE TABLE" for more information on the UNIQUE HASH
clause.
Use ALTER TABLE
to change the representation of the primary key index for a table.
For cache group tables with the PROPAGATE
attribute and for tables of SWT and AWT cache groups, foreign keys specified with ON DELETE CASCADE
must be a proper subset of foreign keys with ON DELETE CASCADE
in the Oracle database tables.
You cannot execute the CREATE CACHE GROUP
statement under the serializable isolation level. An error message is returned if you attempt to do so.
The AUTOREFRESH
parameter automatically propagates changes from the Oracle database to TimesTen cache groups. For explicitly loaded cache groups, deletes, updates and inserts are automatically propagated from the Oracle database to the cache group. For dynamic cache groups, only deletes and updates are propagated. Inserts to the specified Oracle database tables are not propagated to dynamic cache groups. They are dynamically loaded into IMDB Cache when referenced by the application. They can also be explicitly loaded by the application.
To use autorefresh with a cache group, you must specify AUTOREFRESH
when you create the cache group. You can change the MODE
, STATE
and INTERVAL
AUTOREFRESH
settings after a cache group has been created by using the ALTER CACHE GROUP
command. Once a cache group has been specified as either AUTOREFRESH
or PROPAGATE
, you cannot change these attributes.
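For example, the MODE, STATE and INTERVAL settings can later be adjusted with ALTER CACHE GROUP statements along the lines of this sketch; the cache group name matches the customerorders example later in this section, and the new values are illustrative:

ALTER CACHE GROUP customerorders SET AUTOREFRESH STATE ON;
ALTER CACHE GROUP customerorders SET AUTOREFRESH INTERVAL 30 SECONDS;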
TimesTen supports FULL
or INCREMENTAL AUTOREFRESH
. In FULL
mode, the entire cache is periodically unloaded and then reloaded. In INCREMENTAL
mode, TimesTen installs triggers in the Oracle database to track changes and periodically updates only the rows that have changed in the specified Oracle database tables. The first incremental refresh is always a full refresh, unless the autorefresh state is PAUSED
. The default mode is INCREMENTAL
.
FULL AUTOREFRESH
is more efficient when most of the Oracle database table rows have been changed. INCREMENTAL AUTOREFRESH
is more efficient when there are fewer changes.
TimesTen schedules an autorefresh operation when the transaction that contains a statement with AUTOREFRESH
specified is committed. The statement types that cause autorefresh to be scheduled are:
A CREATE CACHE GROUP
statement in which AUTOREFRESH
is specified, and the AUTOREFRESH
state is specified as ON
.
An ALTER CACHE GROUP
statement in which the AUTOREFRESH
state has been changed to ON
.
A LOAD CACHE GROUP
statement on an empty cache group whose autorefresh state is PAUSED
.
The specified interval determines how often autorefresh occurs.
The current STATE
of AUTOREFRESH
can be ON
, OFF
or PAUSED
. By default, the autorefresh state is PAUSED
.
The NOT PROPAGATE
attribute cannot be used with the AUTOREFRESH
attribute.
You can implement sliding windows with time-based aging. See "Configuring a sliding window" in Oracle In-Memory Database Cache User's Guide.
After you have defined an aging policy for the table, you cannot change the policy from LRU to time-based or from time-based to LRU. You must first drop aging and then alter the table to add a new aging policy.
The aging policy must be defined to change the aging state.
LRU and time-based aging can be combined in one system. If you use only LRU aging, the aging thread wakes up based on the cycle specified for the whole database. If you use only time-based aging, the aging thread wakes up based on an optimal frequency. This frequency is determined by the values specified in the CYCLE
clause for all tables. If you use both LRU and time-based aging, then the thread wakes up based on a combined consideration of both types.
Call the ttAgingScheduleNow
procedure to schedule the aging process immediately, regardless of whether the aging state is ON
or OFF
.
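A minimal sketch of that call for a single table; the owner and table name are hypothetical:

Command> CALL ttAgingScheduleNow('ttuser.customer');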
The following rules determine if a row is accessed or referenced for LRU aging:
Any rows used to build the result set of a SELECT
statement.
Any rows used to build the result set of an INSERT...SELECT
statement.
Any rows that are about to be updated or deleted.
Compiled commands are marked invalid and need recompilation when you either drop LRU aging from or add LRU aging to tables that are referenced in the commands.
For LRU aging, if a child row is not a candidate for aging, then neither this child row nor its parent row are deleted. ON DELETE CASCADE
settings are ignored.
For time-based aging, if a parent row is a candidate for aging, then all child rows are deleted. ON DELETE CASCADE
(whether specified or not) is ignored.
Specify either the LRU aging or time-based aging policy on the root table. The policy applies to all tables in the cache group.
For the time-based aging policy, you cannot add or modify the aging column. This is because you cannot add or modify a NOT NULL
column.
Restrictions on defining aging for a cache group:
LRU aging is not supported on a cache group defined with the autorefresh attribute, unless it is a dynamic cache group.
Aging is disabled by default on an explicitly loaded global cache group.
The aging policy cannot be added, altered, or dropped for read-only cache groups or cache groups with the AUTOREFRESH
attribute while the cache agent is active. Stop the cache agent first.
You cannot drop the column that is used for time-based aging.
To cache data in a cache grid, you must create an asynchronous writethrough global cache group. Before you can create this cache group, the TimesTen database must be associated with a cache grid. For more information on creating and using a cache grid and creating and using global cache groups, see "Configuring a cache grid" and "Global cache groups" in Oracle In-Memory Database Cache User's Guide.
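Assuming the database is already attached to a cache grid, a dynamic global AWT cache group over the same customer table used in the examples below might be created as in this sketch:

CREATE DYNAMIC ASYNCHRONOUS WRITETHROUGH GLOBAL CACHE GROUP globalcustomers
FROM customer
  (custid INT NOT NULL,
   name   CHAR(100) NOT NULL,
   addr   CHAR(100),
   zip    INT,
   PRIMARY KEY(custid));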
Create a read-only cache group:
CREATE READONLY CACHE GROUP customerorders AUTOREFRESH INTERVAL 10 MINUTES FROM customer (custid INT NOT NULL, name CHAR(100) NOT NULL, addr CHAR(100), zip INT, region CHAR(10), PRIMARY KEY(custid)), ordertab (orderid INT NOT NULL, custid INT NOT NULL, PRIMARY KEY (orderid), FOREIGN KEY (custid) REFERENCES customer(custid));
Create an asynchronous writethrough cache group:
CREATE ASYNCHRONOUS WRITETHROUGH CACHE GROUP cstomers FROM customer (custid INT NOT NULL, name CHAR(100) NOT NULL, addr CHAR(100), zip INT, PRIMARY KEY(custid));
Create a synchronous writethrough cache group:
CREATE SYNCHRONOUS WRITETHROUGH CACHE GROUP customers FROM customer (custid INT NOT NULL, name CHAR(100) NOT NULL, addr CHAR(100), zip INT, PRIMARY KEY(custid));
Create a user managed cache group:
CREATE USERMANAGED CACHE GROUP updateanywherecustomers AUTOREFRESH MODE INCREMENTAL INTERVAL 30 SECONDS STATE ON FROM customer (custid INT NOT NULL, name CHAR(100) NOT NULL, addr CHAR(100), zip INT, PRIMARY KEY(custid), PROPAGATE);
Create a cache group with time-based aging. Specify agetimestamp
as the column for aging. Specify LIFETIME
2 hours, CYCLE
30 minutes. Aging state is not specified, so the default setting (ON
) is used.
CREATE READONLY CACHE GROUP agingcachegroup AUTOREFRESH MODE INCREMENTAL INTERVAL 5 MINUTES STATE PAUSED FROM customer (customerid NUMBER NOT NULL, agetimestamp TIMESTAMP NOT NULL, PRIMARY KEY (customerid)) AGING USE agetimestamp LIFETIME 2 HOURS CYCLE 30 MINUTES; Command> DESCRIBE customer; Table USER.CUSTOMER: Columns: *CUSTOMERID NUMBER NOT NULL AGETIMESTAMP TIMESTAMP (6) NOT NULL AGING USE AgeTimestamp LIFETIME 2 HOURS CYCLE 30 MINUTES ON 1 table found. (primary key columns are indicated with *)
Use a synonym for a mixed case delimited identifier table name in the Oracle database so the mixed case table name can be cached in TimesTen. First attempt to cache the mixed case Oracle database table name. You see the error "Could not find 'NameofTable' in Oracle":
Command> AUTOCOMMIT 0; Command> PASSTHROUGH 3; Command> CREATE TABLE "MixedCase" (col1 NUMBER PRIMARY KEY NOT NULL); Command> INSERT INTO "MixedCase" VALUES (1); 1 row inserted. Command> COMMIT; Command> CREATE CACHE GROUP MixedCase1 from "MixedCase" (col1 NUMBER PRIMARY KEY NOT NULL); 5140: Could not find SAMPLEUSER.MIXEDCASE in Oracle. May not have privileges. The command failed.
Now, using the PassThrough
attribute, create the synonym "MIXEDCASE"
in the Oracle database and use that synonym as the table name.
Command> AUTOCOMMIT 0; Command> PASSTHROUGH 3; Command> CREATE SYNONYM "MIXEDCASE" FOR "MixedCase"; Command> COMMIT; Command> CREATE CACHE GROUP MixedCase2 FROM "MIXEDCASE" (col1 NUMBER PRIMARY KEY NOT NULL); Warning 5147: Cache group contains synonyms Command> COMMIT;
Attempt to use a synonym name with a read-only cache group or a cache group with the AUTOREFRESH
attribute. You see an error:
Command> AUTOCOMMIT 0; Command> PASSTHROUGH 3; Command> CREATE SYNONYM "MIXEDCASE_AUTO" FOR "MixedCase"; Command> COMMIT; Command> CREATE READONLY CACHE GROUP MixedCase3 AUTOREFRESH MODE INCREMENTAL INTERVAL 10 MINUTES FROM "MIXEDCASE_AUTO" (Col1 NUMBER PRIMARY KEY NOT NULL); 5142: Autorefresh is not allowed on cache groups with Oracle synonyms The command failed.
ALTER CACHE GROUP
ALTER TABLE
DROP CACHE GROUP
FLUSH CACHE GROUP
LOAD CACHE GROUP
UNLOAD CACHE GROUP
The CREATE FUNCTION
statement creates a standalone stored function.
CREATE PROCEDURE
(if owner) or CREATE ANY PROCEDURE
(if not owner).
CREATE [OR REPLACE] FUNCTION [Owner.]FunctionName [(arguments [IN|OUT|IN OUT][NOCOPY] DataType [DEFAULT expr][,...])] RETURN DataType [InvokerRightsClause] [DETERMINISTIC] {IS|AS} PlsqlFunctionBody
The syntax for the InvokerRightsClause
:
AUTHID {CURRENT_USER|DEFINER}
You can specify InvokerRightsClause
or DETERMINISTIC
in any order.
Parameter | Description |
---|---|
OR REPLACE |
Specify OR REPLACE to re-create the function if it already exists. Use this clause to change the definition of an existing function without dropping and re-creating it. When you re-create a function, TimesTen recompiles it. |
FunctionName |
Name of function. |
arguments |
Name of argument or parameter. You can specify 0 or more parameters for the function. If you specify a parameter, you must specify a data type for the parameter. The data type must be a PL/SQL data type.
For more information on PL/SQL data types, see Chapter 3, "PL/SQL Data Types" in the Oracle Database PL/SQL Language Reference. |
IN| OUT |IN OUT |
Parameter modes.
|
NOCOPY |
Specify NOCOPY to instruct TimesTen to pass the parameter as fast as possible. You can enhance performance when passing a large value such as a record, an index-by-table, or a varray to an OUT or IN OUT parameter. IN parameters are always passed NOCOPY .
For more information on |
DEFAULT expr |
Use this clause to specify a default value for the parameter. You can specify := in place of the keyword DEFAULT . |
RETURN DataType |
Required clause. A function must return a value. You must specify the data type of the return value of the function.
Do not specify a length, precision, or scale for the data type. The data type is a PL/SQL data type. For more information on PL/SQL data types, see Chapter 3, "PL/SQL Data Types" in the Oracle Database PL/SQL Language Reference. |
InvokerRightsClause |
Lets you specify whether the SQL statements in PL/SQL functions or procedures execute with definer's or invoker's rights. The AUTHID setting affects the name resolution and privilege checking of SQL statements that a PL/SQL procedure or function issues at runtime, as follows:
For more information, see the "Definer's rights and invoker's rights" section in the Oracle TimesTen In-Memory Database PL/SQL Developer's Guide or the Oracle Database PL/SQL Language Reference. |
DETERMINISTIC |
Specify DETERMINISTIC to indicate that the function should return the same result value whenever it is called with the same values for its parameters.
For more information on the |
IS|AS |
Specify either IS or AS to declare the body of the function. |
PlsqlFunctionBody |
Specifies the function body. |
TimesTen does not support:
parallel_enable_clause
You can specify this clause, but it has no effect.
call_spec
clause
AS EXTERNAL
clause
In a replication environment, the CREATE FUNCTION
statement is not replicated. For more information, see "Creating a new PL/SQL object in an existing active standby pair" and "Adding a PL/SQL object to an existing replication scheme" in the Oracle TimesTen In-Memory Database Replication Guide.
When you create or replace a function, the privileges granted on the function remain the same. If you drop and re-create the object, the object privileges that were granted on the original object are revoked.
Create function get_sal
with one input parameter. Return salary
as type NUMBER
.
Command> CREATE OR REPLACE FUNCTION get_sal > (p_id employees.employee_id%TYPE) RETURN NUMBER IS > v_sal employees.salary%TYPE := 0; > BEGIN > SELECT salary INTO v_sal FROM employees > WHERE employee_id = p_id; > RETURN v_sal; > END get_sal; > / Function created.
Oracle Database PL/SQL Language Reference and Oracle Database SQL Language Reference
The CREATE INDEX
statement creates a range, hash, or bitmap index on one or more columns of a table or materialized view.
No privilege is required for owner.
If not the owner, the system privilege, CREATE
ANY
INDEX
, or the object privilege, INDEX
, is required.
To create a range or bitmap index:
CREATE [UNIQUE|BITMAP] INDEX [Owner.]IndexName ON [Owner.]TableName ({ColumnName [ASC | DESC]} [,... ] )
To create a hash index:
CREATE [UNIQUE] HASH INDEX [Owner.]IndexName ON [Owner.]TableName ({ColumnName [ASC | DESC]} [,... ] ) [ PAGES = RowPages | CURRENT ]
TimesTen creates a non-unique range index by default. Specify CREATE
UNIQUE INDEX
to create a unique range index. Specify CREATE BITMAP INDEX
to create a bitmap index.
To create a non-unique hash index, specify CREATE
HASH
INDEX
. To create a unique hash index, specify CREATE
UNIQUE
HASH
INDEX
.
You cannot create an index (range, hash, or bitmap) on LOB columns.
Specify a bitmap index on each column to increase the performance of complex queries that specify multiple predicates on multiple columns connected by the AND
or OR
operator. At runtime, TimesTen finds the bitmap of rows that satisfies each predicate, combines the bitmaps from the different predicates with bitwise logical operations, and then converts the resulting bitmap into the set of qualifying rows.
Bitmap indexes are used to satisfy these predicates:
Equality predicates. For example: 'x1 = 1'
Range predicates. For example: 'y1 > 10'
and 'z1 BETWEEN 1 and 10'
AND
predicates. For example: 'x1 > 10 AND y1 > 10'
OR
predicates. For example: 'x1 > 10 OR y1 > 10'
Complex predicates with AND or OR. For example: '(x1 > 10 AND y1 > 10) OR (z1 > 10)'
NOT EQUAL
predicate with AND
. For example: 'x1 = 1 and y1 != 1'
Bitmap indexes:
Support COUNT(*) optimization by counting rowids from the bitmaps.
Are used to optimize queries that group by a prefix of columns of the bitmap index.
Are used to optimize distinct queries and order by queries.
Are used in a MERGE
join.
The CREATE INDEX
statement enters the definition of the index in the system catalog and initializes the necessary data structures. Any rows in the table are then added to the index.
If UNIQUE
is specified, all existing rows must have unique values in the indexed column(s).
The new index is maintained automatically until the index is deleted by a DROP INDEX
statement or until the table associated with it is dropped.
Any prepared statements that reference the table with the new index are automatically prepared again the next time they are executed. Then the statements can take advantage, if possible, of the new index.
An index on a temporary table cannot be created by a connection if any other connection has a non-empty instance of the table.
If you are using linguistic comparisons, you can create a linguistic index. A linguistic index uses sort key values and storage is required for these values. Only one unique value for NLS_SORT
is allowed for an index. For more information on linguistic indexes and linguistic comparisons, see "Using linguistic indexes" in Oracle TimesTen In-Memory Database Operations Guide.
If you create indexes that are redundant, TimesTen generates warnings or errors. Call ttRedundantIndexCheck
to see the list of redundant indexes for your tables.
In a replicated environment for an active standby pair, if DDL_REPLICATION_LEVEL
=2 when you execute the CREATE INDEX
on the active database, the index will be replicated to all databases in the replication scheme. The table on which the index is created must be empty. See "Making DDL changes in an active standby pair" in the Oracle TimesTen In-Memory Database Replication Guide for more information.
Indexes can be created on any columns in the table. This includes compressed columns, even columns that exist in separate compression column groups.
To change the size or type of a hash index, drop the hash index and create a new index.
A hash index is created with a fixed size that remains constant for the life of the table. To resize the hash index, drop and recreate the index. A smaller hash index results in more hash collisions. A larger hash index reduces collisions but can waste memory. Hash key comparison is a fast operation, so a small number of hash collisions should not cause a performance problem for TimesTen.
To ensure that your hash index is sized correctly, your application must indicate the expected size of your table with the value of the RowPages
parameter of the PAGES clause. Compute this value by dividing the number of expected rows in your table by 256. For example, if your table has 256,000 rows, specify 1000 for the value of RowPages (256000/256=1000).
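For example, for a table expected to hold roughly 51,200 rows, 51200/256 = 200, so a hash index sized for that table might look like the following sketch (the table and index names are hypothetical):

CREATE UNIQUE HASH INDEX orders_hash ON orders (order_id) PAGES = 200;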
The maximum number of columns that can be specified for an index is 16.
Using indexes in query processing
Proper indexes can improve query performance. Some queries benefit from the use of indexes and some do not. In addition, the choice of indexes for your queries is important.
A range index is ideal for processing range searches and exact matches, especially if most of the values in the index columns are unique. For example, if a range index is defined on columns (C1,C2)
, the index can be used to process the following types of predicates. ConstantOrParam
refers to a constant value or dynamic parameter and range
refers to the operators >,<,>=, or <=:
C1 = ConstantOrParam AND C2 = ConstantOrParam
C1 = ConstantOrParam AND C2 range ConstantOrParam
C1 = ConstantOrParam
C1 range ConstantOrParam
A range index efficiently processes equality and range predicates and efficiently processes sort and group operations. Use range indexes on index columns with many unique values. The order of columns you specify in a range index is relevant. The order of expressions in the predicate of a query that uses the range index is not relevant. When your query is processed, only one range index is used for each scan of your table even if you have defined multiple range indexes on your table.
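As a sketch, the following creates a two-column range index and shows queries whose predicates it can serve; the table and column names are hypothetical:

CREATE TABLE orders2 (c1 NUMBER NOT NULL, c2 NUMBER, note VARCHAR2(40));
CREATE INDEX orders2_ix ON orders2 (c1, c2);

-- Both of these predicates can be processed with the (c1, c2) range index:
SELECT * FROM orders2 WHERE c1 = 10 AND c2 = 20;
SELECT * FROM orders2 WHERE c1 = 10 AND c2 >= 20;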
A hash index efficiently processes equality predicates. You must size your hash index correctly for optimal performance. Use the PAGES
parameter to size your hash index. If you specify a PAGES
value that is too small, a large number of hash collisions may result, leading to performance degradation for statements that access the hash index. The order of columns specified in the hash index is not relevant and the order of expressions in the predicate of the query that uses the hash index is not relevant. If either a hash index or a range index can be used to process a particular equality predicate, the hash index is chosen because a lookup in a hash index is faster than a scan of a range index.
A bitmap index efficiently processes equality and range predicates, especially if you have complex queries with multiple expressions using AND
or OR
operators. Use bitmap indexes for queries where the index columns have few unique values (meaning the index columns have a large number of duplicate values) and where the index columns have low cardinality. The index contains a bitmap for each unique index value rather than indexing each row separately; each bit in the bitmap corresponds to a row in the table.
When you process a query, multiple bitmap indexes can be used for each scan of your table. Bitmap indexes can process equality and inequality predicates as well as range predicates or a combination of these predicates connected by AND
or OR
operators.
You can influence the indexes used by the optimizer by setting statement level or transaction level optimizer hints. For more information on statement level optimizer hints, see "Statement level optimizer hints". For more information on transaction level optimizer hints, see "ttOptSetFlag", "ttOptSetOrder", or "ttOptUseIndex" in the Oracle TimesTen In-Memory Database Reference. You can also use the TimesTen Index Advisor to provide recommendations for indexes, given a specific set of queries or a specific workload. For more information on the index advisor, see "Using the Index Advisor to recommend indexes" in the Oracle TimesTen In-Memory Database Operations Guide.
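For instance, a transaction level hint can be set by calling the ttOptSetFlag built-in procedure before preparing the query. In the sketch below the flag name and value are assumptions for illustration only; see "ttOptSetFlag" in the Oracle TimesTen In-Memory Database Reference for the supported flag names:

-- Flag name is assumed for illustration; intended to disable merge joins for this transaction.
Command> CALL ttOptSetFlag('MergeJoin', 0);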
Create a table and then create a unique hash index on col2
. Do not specify the PAGES
clause. If PAGES
is not specified, the current table page count is used for the size of the hash table. Use INDEXES
to verify the index was created. Insert a row in the table, set SHOWPLAN
to 1 and then verify the optimizer uses the hash index.
Command> CREATE TABLE tab (col1 NUMBER PRIMARY KEY NOT NULL, col2 VARCHAR2 (30)); Command> CREATE UNIQUE HASH INDEX hash1 ON tab (col2); Command> INDEXES; Indexes on table TESTUSER.TAB: HASH1: unique hash index on columns: COL2 TAB: unique range index on columns: COL1 2 indexes found. 2 indexes found on 1 table. Command> INSERT INTO tab VALUES (10, 'ABC'); Command> SHOWPLAN 1; Command> SELECT * FROM tab where col2 = 'ABC'; Query Optimizer Plan: STEP: 1 LEVEL: 1 OPERATION: RowLkHashScan TBLNAME: TAB IXNAME: HASH1 INDEXED CONDITION: TAB.COL2 = 'ABC' NOT INDEXED: <NULL> < 10, ABC > 1 row found.
Create a table and create a non-unique hash index on col1
. Use PAGES = CURRENT
to use the current table page count to size the hash index. Use INDEXES
to verify the non-unique hash index is created.
Command> CREATE TABLE tab2 (col1 NUMBER); Command> CREATE HASH INDEX hash_index ON tab2 (col1) PAGES = CURRENT; Command> INDEXES; Indexes on table TESTUSER.TAB2: HASH_INDEX: non-unique hash index on columns: COL1 1 index found. 1 index found on 1 table.
Create table and create unique hash index on col3
. Use PAGES = 100
to specify a page count of 100 for the size of the hash table. Use INDEXES
to verify the unique hash index is created.
Command> CREATE TABLE tab3 (col1 NUMBER, col2 NUMBER, col3 TT_INTEGER); Command> CREATE UNIQUE HASH INDEX unique_hash1 on tab3 (col3) PAGES = 100; Command> INDEXES; Indexes on table TESTUSER.TAB3: UNIQUE_HASH1: unique hash index on columns: COL3 1 index found. 1 index found on 1 table.
Create a table and then create a bitmap index on the column id
. Use the ttIsql
SHOWPLAN
command to verify that the bitmap index is used in the query.
Command> CREATE TABLE tab1 (id NUMBER); Command> INSERT INTO tab1 VALUES (10); 1 row inserted. Command> INSERT INTO tab1 VALUES (20); 1 row inserted. Command> CREATE BITMAP INDEX bitmap_id ON tab1 (id); Command> COMMIT; Command> SET AUTOCOMMIT OFF; Command> SHOWPLAN 1; Command> SELECT * FROM tab1 WHERE id = 10; Query Optimizer Plan: STEP: 1 LEVEL: 1 OPERATION: RowLkBitmapScan TBLNAME: TAB1 IXNAME: BITMAP_ID INDEXED CONDITION: TAB1.ID = 10 NOT INDEXED: <NULL> < 10 > 1 row found.
The regions
table in the HR
schema has a unique index on region_id
. Issue the ttIsql
INDEXES
command on table regions
. You see the unique range index regions
.
Command> INDEXES REGIONS; Indexes on table SAMPLEUSER.REGIONS: REGIONS: unique range index on columns: REGION_ID (referenced by foreign key index COUNTR_REG_FK on table SAMPLEUSER.COUNTRIES) 1 index found. 1 index found on 1 table.
Attempt to create a unique index i
on table regions
indexing on column region_id
. You see a warning message.
Command> CREATE UNIQUE INDEX i ON regions (region_id); Warning 2232: New index I is identical to existing index REGIONS; consider dropping index I
Call ttRedundantIndexCheck
to see warning message for this index:
Command> CALL ttRedundantIndexCheck ('regions'); < Index SAMPLEUSER.REGIONS.I is identical to index SAMPLEUSER.REGIONS.REGIONS; consider dropping index SAMPLEUSER.REGIONS.I > 1 row found.
Create table redundancy
and define columns col1
and col2
. Create two user indexes on col1
and col2
. You see an error message when you attempt to create the second index r2
. Index r1
is created. Index r2
is not created.
Command> CREATE TABLE redundancy (col1 CHAR (30), col2 VARCHAR2 (30)); Command> CREATE INDEX r1 ON redundancy (col1, col2); Command> CREATE INDEX r2 ON redundancy (col1, col2); 2231: New index R2 would be identical to existing index R1 The command failed.
Issue the ttIsql
command INDEXES
on table redundancy
to show that only index r1
is created:
Command> INDEXES redundancy; Indexes on table SAMPLEUSER.REDUNDANCY: R1: non-unique range index on columns: COL1 COL2 1 index found. 1 index found on 1 table.
This unique index ensures that all part numbers are unique.
CREATE UNIQUE INDEX purchasing.partnumindex ON purchasing.parts (partnumber);
Create a linguistic index named german_index
on table employees1
. To have more than one linguistic sort, create a second linguistic index.
Command> CREATE TABLE employees1 (id CHARACTER (21), id2 character (21)); Command> CREATE INDEX german_index ON employees1 (NLSSORT(id, 'NLS_SORT=GERMAN')); Command> CREATE INDEX german_index2 ON employees1 (NLSSORT(id2, 'nls_sort=german_ci')); Command> indexes employees1; Indexes on table SAMPLEUSER.EMPLOYEES1: GERMAN_INDEX: non-unique range index on columns: NLSSORT(ID,'NLS_SORT=GERMAN') GERMAN_INDEX2: non-unique range index on columns: NLSSORT(ID2,'nls_sort=german_ci') 2 indexes found. 1 table found.
The CREATE MATERIALIZED VIEW
statement creates a materialized view whose contents are based on the tables specified in the SelectQuery clause. The original tables used to create the view are referred to as detail tables. The view can be refreshed synchronously or asynchronously with regard to changes in the detail tables. If you create an asynchronous materialized view, you must first create a materialized view log on each detail table. See "CREATE MATERIALIZED VIEW LOG".
User executing the statement must have CREATE MATERIALIZED VIEW
(if owner) or CREATE ANY MATERIALIZED VIEW
(if not owner).
Owner of the materialized view must have SELECT
on the detail tables.
Owner of the materialized view must have CREATE TABLE
.
CREATE MATERIALIZED VIEW [Owner.]ViewName [REFRESH { {FAST | COMPLETE} [NEXT SYSDATE[+NUMTODSINTERVAL(IntegerLiteral,IntervalUnit)]] | NEXT SYSDATE[+NUMTODSINTERVAL(IntegerLiteral,IntervalUnit)] } ] AS SelectQuery [PRIMARY KEY (ColumnName [,...])] [UNIQUE HASH ON (HashColumnName [,...]) PAGES = PrimaryPages]
Parameter | Description |
---|---|
[ Owner .] ViewName |
Name assigned to the new view. |
REFRESH |
Specifies an asynchronous materialized view. |
FAST | COMPLETE |
Refresh methods. FAST specifies incremental refresh. COMPLETE specifies full refresh. |
NEXT SYSDATE |
If NEXT SYSDATE is specified without NUMTODSINTERVAL , the materialized view is refreshed incrementally every time a detail table is modified. The refresh occurs in a separate transaction immediately after the transaction that modifies the detail table has been committed. You cannot specify a full refresh (COMPLETE ) every time a detail table is modified.
|
[+NUMTODSINTERVAL(IntegerLiteral, IntervalUnit)] |
If specified, the materialized view is refreshed at the specified interval. IntegerLiteral must be an integer. IntervalUnit must be one of the following values: 'DAY' , 'HOUR' , 'MINUTE' , 'SECOND' .
|
SelectQuery |
The query that selects the columns from the detail tables to be used in the view. |
ColumnName |
Name of the column(s) that forms the primary key for the view to be created. Up to 16 columns can be specified for the primary key. Each result column name of a viewed table must be unique. The column name definition cannot contain the table or owner component. |
UNIQUE HASH ON |
Hash index for the table. Only unique hash indexes are created. This parameter is used for equality predicates. UNIQUE HASH ON requires that a primary key be defined. |
HashColumnName |
Column defined in the view that is to participate in the hash key of this table. The columns specified in the hash index must be identical to the columns in the primary key. |
PAGES = PrimaryPages |
Sizes the hash index to reflect the expected number of pages in your table. To determine the value for PrimaryPages , divide the number of expected rows in your table by 256. For example, if your table has 256,000 rows, specify 1000 for PrimaryPages (256000/256=1000).
For more information on hash indexes, see "CREATE TABLE". |
This section describes restrictions, requirements, and other considerations for materialized views, covering the following topics:
Restrictions and requirements for materialized views
The following restrictions and requirements on the defining query apply generally to materialized views, whether synchronous or asynchronous. (In some cases there are further restrictions for asynchronous materialized views with FAST
refresh, as noted.)
Each expression in the select list must have a unique name.
Do not use non-materialized views to define a materialized view.
Do not define CLOB
, BLOB
, or NCLOB
data types for columns in the select list of the materialized view query.
The detail tables cannot belong to a cache group and the detail tables cannot have compression.
Do not use SELECT
FOR
UPDATE
.
Do not reference system tables or views.
Do not use nested definitions for a materialized view.
Do not use dynamic parameters.
Do not use ROWNUM
.
Do not use analytic functions.
Do not use GROUPING
SETS
, ROLLUP
, or CUBE
.
Do not use the SYSDATE
function.
Do not use the functions SYSTEM_USER
, USER
, CURRENT_USER
, or SESSION_USER
.
Do not use NEXTVAL
or CURRVAL
.
Outer joins are allowed but the select list must project at least one non-nullable column from each of the inner tables specified in the outer join.
Note: Outer joins are not allowed for asynchronous materialized views with FAST
refresh.
Do not use the WITH <subquery
> clause.
The following restrictions (not on the defining query) apply generally to materialized views, whether synchronous or asynchronous.
Do not have a hash-based primary key that contains any aggregate columns of the materialized view.
A materialized view cannot be replicated directly using TimesTen replication. You can replicate the detail tables. You must define the same materialized view on both sides of replication. TimesTen automatically updates the corresponding materialized views.
You cannot define a foreign key if the referencing or referenced table is a materialized view.
The following restrictions and requirements on the defining query apply to synchronous materialized views and to asynchronous materialized views with FAST
refresh. (In some cases there are further restrictions for asynchronous materialized views with FAST
refresh, as noted.)
The view definition must include all columns in the group by list in the select list.
An aggregate view must include a COUNT (*)
or COUNT
(non-nullable column) in the select list (see the sketch at the end of this list of restrictions).
Do not use derived tables or JOIN
tables.
Do not use SELECT
DISTINCT
or an aggregate distinct function.
Note: Aggregate functions are not allowed for asynchronous materialized views with FAST
refresh.
Do not use the set operators UNION
, MINUS
, or INTERSECT
.
Do not use SUM
of nullable expressions.
Use only simple columns as group by columns.
Group by columns cannot belong to self join tables.
Do not use these clauses:
HAVING
ORDER
BY
DISTINCT
FIRST
JOIN
Do not use the TT_HASH
function.
You can use SUM
and COUNT
but do not use expressions involving SUM
and COUNT
. Do not use AVG
, which is treated as SUM/COUNT
.
Do not specify MIN
or MAX
functions in the select list.
For joins:
Join predicates cannot have an OR
.
Do not specify Cartesian product joins (joins with no join predicate).
For outer joins, outer join each inner table with at most one table.
Note: Outer joins are not allowed for asynchronous materialized views with FAST
refresh.
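The following sketch illustrates several of the preceding requirements for an aggregate view: the GROUP BY column appears in the select list, COUNT(*) is included, SUM is applied only to a non-nullable expression, and every select list expression has a unique name. The orders table is hypothetical, and because REFRESH is omitted the materialized view is synchronous:

CREATE TABLE orders
  (order_id NUMBER NOT NULL PRIMARY KEY,
   cust_id  NUMBER NOT NULL,
   amount   NUMBER (10,2) NOT NULL);

CREATE MATERIALIZED VIEW custtotals AS
  SELECT cust_id, COUNT(*) ordercount, SUM(amount) totalamount
  FROM orders
  GROUP BY cust_id;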
The following restrictions on the defining query apply only to asynchronous materialized views with FAST
refresh.
Aggregate functions are not supported
Outer joins are not supported.
The select list must include the ROWID
or the primary key columns for all the detail tables.
The following restrictions (not on the defining query) apply only to asynchronous materialized views with FAST
refresh.
The materialized view log must be created for each detail table in the asynchronous materialized view with FAST
refresh before creating the asynchronous materialized view.
The materialized view log must include all the columns used in the asynchronous materialized views.
Additional considerations for materialized views
The following additional considerations apply generally to materialized views, whether synchronous or asynchronous.
A materialized view is read-only and cannot be updated directly. A materialized view is updated only when changes are made to the associated detail tables. Therefore a materialized view cannot be the target of a DELETE
, UPDATE
or INSERT
statement.
By default, a range index is created to enforce the primary key for a materialized view. Alternatively, use the UNIQUE HASH
clause to specify a hash index for the primary key.
If your application performs range queries over a materialized view's primary key, then choose a range index for that view by omitting the UNIQUE HASH
clause.
If your application performs only exact match lookups on the primary key, then a hash index may offer better response time and throughput. In such a case, specify the UNIQUE HASH
clause. See "CREATE TABLE" for more information about the UNIQUE HASH
clause.
You can use ALTER TABLE
to change the representation of the primary key index or resize a hash index of a materialized view.
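For example, a minimal sketch of changing the primary key index representation of the v1 materialized view (created in the examples below) to a range index and then back to a hash index with a new page count; this assumes the USE RANGE INDEX and USE HASH INDEX PAGES clauses of ALTER TABLE, and the page count shown is illustrative only:

ALTER TABLE v1 USE RANGE INDEX;
ALTER TABLE v1 USE HASH INDEX PAGES = 200;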
You cannot add or drop columns in the materialized view with the ALTER TABLE
statement. To change the structure of the materialized view, drop and re-create the view.
You can create indexes on the materialized view with the CREATE INDEX
SQL statement.
The following additional considerations apply only to asynchronous materialized views.
If REFRESH
is specified, at least one of the refresh options (FAST
or COMPLETE
) or the refresh interval (NEXT SYSDATE
) must be specified. (If you omit REFRESH
, the materialized view is updated synchronously.)
If you create an asynchronous materialized view with REFRESH FAST
, it is recommended that you update the statistics on the materialized view log, the materialized view, and the base table on which the materialized view is created, to improve the performance of updates on the base table and of refreshes of the materialized view.
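For example, a sketch that updates statistics with the ttOptUpdateStats built-in procedure, assuming the empmatview1 view and employees detail table from the examples below; the materialized view log table (MVLOG$_detailTableID) can be updated the same way:

Command> CALL ttOptUpdateStats('employees', 1);
Command> CALL ttOptUpdateStats('empmatview1', 1);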
For an asynchronous materialized view with FAST
refresh, TimesTen creates a unique index that is refreshed incrementally. The index is created on the primary key or ROWID
of the detail tables included in the select list.
The owner of a materialized view must have the SELECT
privilege on its detail tables. The SELECT
privilege is implied by the SELECT ANY TABLE
and ADMIN
system privileges. When the SELECT
privilege or a higher-level system privilege on the detail tables is revoked from the owner of the materialized view, the materialized view becomes invalid.
You can select from an invalid asynchronous materialized view. Refreshing an invalid asynchronous materialized view fails with an error.
Selecting from an invalid synchronous materialized view fails with an error. Updates to the detail tables of an invalid synchronous materialized view do not update the materialized view.
You can identify invalid materialized views by using the ttIsql describe
command and by inspecting the STATUS
column of the SYS.DBA_OBJECTS
, SYS.ALL_OBJECTS
or SYS.USER_OBJECTS
system tables. See Oracle TimesTen In-Memory Database System Tables and Views Reference.
If the revoked privilege is restored, you can make an invalid materialized view valid again by:
Dropping and then re-creating the materialized view.
Refreshing an invalid asynchronous materialized view if it was originally specified with complete refreshes (see the sketch below).
For more information, see "Object privileges for materialized views" in Oracle TimesTen In-Memory Database Operations Guide.
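For instance, once the revoked privilege has been restored, an asynchronous materialized view that was defined with COMPLETE refresh can be refreshed explicitly. A minimal sketch, assuming the empmatview1 view created in the examples below (see "REFRESH MATERIALIZED VIEW" for the full syntax):

REFRESH MATERIALIZED VIEW empmatview1;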
Create a materialized view of columns from the customer
and bookorder
tables.
CREATE MATERIALIZED VIEW custorder AS SELECT custno, custname, ordno, book FROM customer, bookorder WHERE customer.custno=bookorder.custno;
Create a materialized view of columns x1
and y1
from the t1
table.
CREATE MATERIALIZED VIEW v1 AS SELECT x1, y1 FROM t1 PRIMARY KEY (x1) UNIQUE HASH (x1) PAGES=100;
Create a materialized view from an outer join of columns x1
and y1
from the t1
and t2
tables.
CREATE MATERIALIZED VIEW v2 AS SELECT x1, y1 FROM t1, t2 WHERE x1=x2(+);
Create an asynchronous materialized view called empmatview
with incremental refresh. The materialized view will be refreshed immediately after updates to employees
have been committed. The columns in empmatview
are employee_id
and email
. You must create a materialized view log before you create an asynchronous materialized view.
CREATE MATERIALIZED VIEW empmatview REFRESH FAST NEXT SYSDATE AS SELECT employee_id, email FROM employees; 107 rows materialized.
Create an asynchronous materialized view called empmatview1
with complete refresh. A full refresh of the materialized view occurs every 10 days. The columns in empmatview1
are employee_id
and email
. You must create a materialized view log before you create an asynchronous materialized view.
CREATE MATERIALIZED VIEW empmatview1 REFRESH COMPLETE NEXT SYSDATE+NUMTODSINTERVAL(10,'day') AS SELECT employee_id, email FROM employees; 107 rows materialized.
The following example creates a materialized view empmatview2
based on selected columns employee_id
and email
from table employees
. After the materialized view is created, create an index on the materialized view column mvemp_id
of the materialized view empmatview2
.
CREATE MATERIALIZED VIEW empmatview2 AS SELECT employee_id mvemp_id, email mvemail FROM employees; 107 rows materialized. CREATE INDEX empmvindex ON empmatview2 (mvemp_id);
CREATE MATERIALIZED VIEW LOG
CREATE TABLE
CREATE VIEW
DROP [MATERIALIZED] VIEW
REFRESH MATERIALIZED VIEW
The CREATE MATERIALIZED VIEW LOG
statement creates a log in which changes to the detail table are recorded. The log is required for an asynchronous materialized view that is refreshed incrementally. The log must be created before the materialized view is created. The log is a table in the user's schema called MVLOG$_
detailTableID
, where detailTableID
is a system-generated ID.
This statement also creates other objects for internal use:
A global temporary table called MVLGT$_
detailTableID
A sequence called MVSEQ$_
detailTableID
The objects are dropped when the DROP MATERIALIZED VIEW LOG
statement is executed.
SELECT
on the detail table and
CREATE TABLE
or CREATE ANY TABLE
(if not owner).
CREATE MATERIALIZED VIEW LOG ON [Owner.]TableName [ WITH { PRIMARY KEY[, ROWID] | ROWID[, PRIMARY KEY] } [(columnName[,...])] | (columnName[,...]) ]
Parameter | Description |
---|---|
[ Owner .] TableName |
Name of the detail table for the materialized view. |
[( columnName [,...])] |
List of columns for which changes will be recorded in the log. You cannot include the primary key columns in the column list when you specify the PRIMARY KEY option. |
Use the WITH
clause to indicate in the materialized view log which rows have been updated in the detail table. Updated rows can be indicated either by corresponding primary key value or by rowid.
Specify either PRIMARY KEY
or ROWID
in the WITH
clause. However, if the WITH
clause is specified without either option, it defaults to PRIMARY KEY
. In addition, the materialized view log defaults to use PRIMARY KEY
if the WITH
clause is omitted altogether.
Specify PRIMARY KEY
to identify changed rows by the corresponding primary key value. Or specify ROWID
to identify changed rows by rowid. The ROWID
option is useful when the table does not have a primary key or when you do not want to use the primary key when you create the materialized view.
You can specify both PRIMARY KEY
and ROWID
. The materialized view log may be used by more than one asynchronous materialized view using the specified table as the detail table. However, you can only specify one PRIMARY KEY
clause, one ROWID
clause and one column list for a materialized view log.
Only one materialized view log is created for a table, even if the table is the detail table for more than one materialized view with FAST
refreshes. Make sure to include all the columns that are used in different asynchronous materialized views with FAST
refresh.
A materialized view log cannot be created using a materialized view as the table or for tables in cache groups.
A materialized view log cannot be altered to add or drop columns.
Create a materialized view log on the employees
table. Include employee_id
(the primary key) and email
in the log.
CREATE MATERIALIZED VIEW LOG ON employees WITH PRIMARY KEY (email);
You can create the same materialized view log on the employees table without specifying PRIMARY KEY
, which is the default and so is implied, as follows.
CREATE MATERIALIZED VIEW LOG ON employees WITH (email);
To create a materialized view log on the employees table with only the primary key, execute the following.
CREATE MATERIALIZED VIEW LOG ON employees;
Create a materialized view log on the employees
table. Include employee_id
(the primary key) and rowid in the log.
Command> CREATE MATERIALIZED VIEW LOG ON employees WITH primary key, rowid;
Create a materialized view log on the employees
table. Include rowid in the log.
Command> CREATE MATERIALIZED VIEW LOG ON employees WITH rowid;
Create a materialized view log on the employees
table. Include rowid, primary key (employee_id
) and email
in the log.
Command> CREATE MATERIALIZED VIEW LOG ON employees WITH rowid, primary key (email);
Create a materialized view log on the employees
table. Include the primary key (by default) and two other columns, email
and last_name
in the log.
Command> CREATE MATERIALIZED VIEW LOG ON employees WITH (email, last_name);
CREATE MATERIALIZED VIEW
DROP MATERIALIZED VIEW LOG
The CREATE PACKAGE
statement creates the specification for a standalone package, which is an encapsulated collection of related procedures, functions, and other program objects stored together in your database. The package specification declares these objects. The package body defines these objects.
CREATE PROCEDURE
(if owner) or CREATE ANY PROCEDURE
(if not owner).
CREATE [OR REPLACE] PACKAGE [Owner.]PackageName [InvokerRightsClause] {IS|AS} PlsqlPackageSpec
The syntax for the InvokerRightsClause
:
AUTHID {CURRENT_USER | DEFINER}
Parameter | Description |
---|---|
OR REPLACE |
Specify OR REPLACE to re-create the package specification if it already exists. Use this clause to change the specification of an existing package without dropping and recreating the package. When you change a package specification, TimesTen recompiles it. |
PackageName |
Name of the package. |
InvokerRightsClause |
Lets you specify whether the SQL statements in PL/SQL functions or procedures execute with definer's or invoker's rights. The AUTHID setting affects the name resolution and privilege checking of SQL statements that a PL/SQL procedure or function issues at runtime, as follows: with AUTHID DEFINER (definer's rights), SQL statements execute with the privileges of the owner and unqualified object names are resolved in the owner's schema; with AUTHID CURRENT_USER (invoker's rights), SQL statements execute with the privileges of the invoking user and unqualified object names are resolved in the invoker's schema.
For more information, see the "Definer's rights and invoker's rights" section in the Oracle TimesTen In-Memory Database PL/SQL Developer's Guide or the Oracle Database PL/SQL Language Reference. |
IS|AS |
Specify either IS or AS to declare the body of the function. |
PlsqlPackageSpec |
Specifies the package specification. Can include type definitions, cursor declarations, variable declarations, constant declarations, exception declarations and PL/SQL subprogram declarations. |
In a replicated environment, the CREATE PACKAGE
statement is not replicated. For more information, see "Creating a new PL/SQL object in an existing active standby pair" and "Adding a PL/SQL object to an existing replication scheme" in the Oracle TimesTen In-Memory Database Replication Guide.
When you create or replace a package, the privileges granted on the package remain the same. If you drop and re-create the object, the object privileges that were granted on the original object are revoked.
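For example, a minimal sketch of a package specification; the emp_actions package and its subprograms are hypothetical:

Command> CREATE OR REPLACE PACKAGE emp_actions AS
       >   PROCEDURE hire_employee (p_name VARCHAR2, p_salary NUMBER);
       >   FUNCTION num_employees RETURN NUMBER;
       > END emp_actions;
       > /

Package created.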
Oracle Database PL/SQL Language Reference and Oracle Database SQL Language Reference
The CREATE PACKAGE BODY
statement creates the body of a standalone package. A package is an encapsulated collection of related procedures, functions, and other program objects stored together in your database. A package specification declares these objects. A package body defines these objects.
CREATE PROCEDURE
(if owner) or CREATE ANY PROCEDURE
(if not owner).
CREATE [OR REPLACE] PACKAGE BODY [Owner.]PackageBody {IS|AS} plsql_package_body
Parameter | Description |
---|---|
OR REPLACE |
Specify OR REPLACE to re-create the package body if it already exists. Use this clause to change the body of an existing package without dropping and recreating it. When you change a package body, TimesTen recompiles it. |
PackageBody |
Name of the package body. |
IS|AS |
Specify either IS or AS to declare the body of the function. |
plsql_package_body |
Specifies the package body which consists of PL/SQL subprograms. |
In a replicated environment, the CREATE PACKAGE BODY
statement is not replicated. For more information, see "Creating a new PL/SQL object in an existing active standby pair" and "Adding a PL/SQL object to an existing replication scheme" in the Oracle TimesTen In-Memory Database Replication Guide.
When you create or replace a package body, the privileges granted on the package body remain the same. If you drop and re-create the object, the object privileges that were granted on the original object are revoked.
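For example, a minimal sketch of a package body for the hypothetical emp_actions specification shown under CREATE PACKAGE; the emp_list table is also hypothetical:

Command> CREATE OR REPLACE PACKAGE BODY emp_actions AS
       >   PROCEDURE hire_employee (p_name VARCHAR2, p_salary NUMBER) IS
       >   BEGIN
       >     INSERT INTO emp_list (name, salary) VALUES (p_name, p_salary);
       >   END hire_employee;
       >   FUNCTION num_employees RETURN NUMBER IS
       >     v_count NUMBER;
       >   BEGIN
       >     SELECT COUNT(*) INTO v_count FROM emp_list;
       >     RETURN v_count;
       >   END num_employees;
       > END emp_actions;
       > /

Package body created.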
The CREATE PROCEDURE
statement creates a standalone stored procedure.
CREATE PROCEDURE
(if owner) or CREATE ANY PROCEDURE
(if not owner).
CREATE [OR REPLACE] PROCEDURE [Owner.]ProcedureName [(arguments [IN|OUT|IN OUT][NOCOPY] DataType [DEFAULT expr][,...])] [InvokerRightsClause] [DETERMINISTIC] {IS|AS} plsql_procedure_body
The syntax for the InvokerRightsClause
:
AUTHID {CURRENT_USER|DEFINER}
You can specify InvokerRightsClause
or DETERMINISTIC
in any order.
Parameter | Description |
---|---|
OR REPLACE |
Specify OR REPLACE to re-create the procedure if it already exists. Use this clause to change the definition of an existing procedure without dropping and recreating it. When you re-create a procedure, TimesTen recompiles it. |
ProcedureName |
Name of procedure. |
arguments |
Name of argument/parameter. You can specify 0 or more parameters for the procedure. If you specify a parameter, you must specify a data type for the parameter. The data type must be a PL/SQL data type.
For more information on PL/SQL data types, see Chapter 3, "PL/SQL Data Types" in the Oracle Database PL/SQL Language Reference. |
[IN| OUT |IN OUT] |
Parameter modes.
|
NOCOPY |
Specify NOCOPY to instruct TimesTen to pass the parameter as fast as possible. Can enhance performance when passing a large value such as a record, an index-by-table, or a varray to an OUT or IN OUT parameter. IN parameters are always passed NOCOPY .
For more information on |
DEFAULT expr |
Use this clause to specify a DEFAULT value for the parameter. You can specify := in place of the keyword DEFAULT . |
InvokerRightsClause |
Lets you specify whether the SQL statements in PL/SQL functions or procedures execute with definer's or invoker's rights. The AUTHID setting affects the name resolution and privilege checking of SQL statements that a PL/SQL procedure or function issues at runtime, as follows: with AUTHID DEFINER (definer's rights), SQL statements execute with the privileges of the owner and unqualified object names are resolved in the owner's schema; with AUTHID CURRENT_USER (invoker's rights), SQL statements execute with the privileges of the invoking user and unqualified object names are resolved in the invoker's schema.
For more information, see the "Definer's rights and invoker's rights" section in the Oracle TimesTen In-Memory Database PL/SQL Developer's Guide or the Oracle Database PL/SQL Language Reference. |
DETERMINISTIC |
Specify DETERMINISTIC to indicate that the procedure should return the same result value whenever it is called with the same values for its parameters.
For more information on the |
IS|AS |
Specify either IS or AS to declare the body of the procedure. |
plsql_procedure_body |
Specifies the procedure body. |
TimesTen does not support:
call_spec
clause
AS EXTERNAL
clause
In a replicated environment, the CREATE PROCEDURE
statement is not replicated. For more information, see "Creating a new PL/SQL object in an existing active standby pair" and "Adding a PL/SQL object to an existing replication scheme" in the Oracle TimesTen In-Memory Database Replication Guide.
The namespace for PL/SQL procedures is distinct from the TimesTen built-in procedures. You can create a PL/SQL procedure with the same name as a TimesTen built-in procedure.
When you create or replace a procedure, the privileges granted on the procedure remain the same. If you drop and re-create the object, the object privileges that were granted on the original object are revoked.
Create a procedure query_emp
to retrieve information about an employee. Pass the employee_id
171 to the procedure and retrieve the last_name
and salary
into two OUT
parameters.
Command> CREATE OR REPLACE PROCEDURE query_emp > (p_id IN employees.employee_id%TYPE, > p_name OUT employees.last_name%TYPE, > p_salary OUT employees.salary%TYPE) IS > BEGIN > SELECT last_name, salary INTO p_name, p_salary > FROM employees > WHERE employee_id = p_id; > END query_emp; > / Procedure created.
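One way to invoke the procedure is from an anonymous PL/SQL block. The following sketch assumes that DBMS_OUTPUT is used to display the OUT parameters and that serveroutput is enabled in ttIsql:

Command> SET SERVEROUTPUT ON
Command> DECLARE
       >   v_name   employees.last_name%TYPE;
       >   v_salary employees.salary%TYPE;
       > BEGIN
       >   query_emp(171, v_name, v_salary);
       >   DBMS_OUTPUT.PUT_LINE(v_name || ' earns ' || v_salary);
       > END;
       > /
PL/SQL procedure successfully completed.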
The CREATE REPLICATION
statement:
Defines a classic replication scheme on a participating database.
Installs the specified configuration in the executing database's replication system tables.
Typically consists of one or more replication element specifications and zero or more STORE
specifications.
TimesTen SQL configuration for replication also provides a programmable way to configure a classic replication scheme. The configuration can be embedded in C, C++ or Java code. Replication can be configured locally or from remote systems using client/server.
In addition, you need to use the ttRepAdmin
utility to maintain operations not covered by the supported SQL statements. Use ttRepAdmin
to change replication state, duplicate databases, list the replication configuration, and view replication status.
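For example, a sketch of two ttRepAdmin invocations from an operating system shell; masterDSN is a placeholder DSN, and the option names are assumed to match the ttRepAdmin documentation in the Oracle TimesTen In-Memory Database Reference:

ttRepAdmin -showconfig masterDSN
ttRepAdmin -showstatus masterDSN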
ADMIN
A replication element is an entity that TimesTen synchronizes between databases. A replication element can be a whole table or a database. A database can include most types of tables and cache groups. It can include only specified tables and cache groups, or include all tables except specified tables and cache groups. It cannot include temporary tables or views, whether materialized or nonmaterialized.
A replication scheme is a set of replication elements, as well as the databases that maintain copies of these elements.
For more detailed information on SQL configuration for classic replication, see "Defining a classic replication scheme" in the Oracle TimesTen In-Memory Database Replication Guide.
CREATE REPLICATION [Owner.]ReplicationSchemeName { ELEMENT ElementName { DATASTORE | { TABLE [Owner.]TableName [CheckConflicts]} | SEQUENCE [Owner.]SequenceName} { MASTER | PROPAGATOR } FullStoreName [TRANSMIT { NONDURABLE | DURABLE }] { SUBSCRIBER FullStoreName [,...] [ReturnServiceAttribute] } [,...] } [...] [{INCLUDE | EXCLUDE} {TABLE [[Owner.]TableName[,...]] | CACHE GROUP [[Owner.]CacheGroupName[,...]] | SEQUENCE [[Owner.]SequenceName[,...]]} [,...]] [ STORE FullStoreName [StoreAttribute [... ]]] [...] [ NetworkOperation[...]]
Syntax for CheckConflicts
is described in "CHECK CONFLICTS".
Syntax for ReturnServiceAttribute
:
{ RETURN RECEIPT [BY REQUEST] | RETURN TWOSAFE [BY REQUEST] | NO RETURN }
Syntax for StoreAttribute
:
DISABLE RETURN {SUBSCRIBER | ALL} NumFailures RETURN SERVICES {ON | OFF} WHEN [REPLICATION] STOPPED DURABLE COMMIT {ON | OFF} RESUME RETURN MilliSeconds LOCAL COMMIT ACTION {NO ACTION | COMMIT} RETURN WAIT TIME Seconds COMPRESS TRAFFIC {ON | OFF} PORT PortNumber TIMEOUT Seconds FAILTHRESHOLD Value CONFLICT REPORTING SUSPEND AT Value CONFLICT REPORTING RESUME AT Value TABLE DEFINITION CHECKING {RELAXED|EXACT}
Syntax for NetworkOperation
:
ROUTE MASTER FullStoreName SUBSCRIBER FullStoreName { { MASTERIP MasterHost | SUBSCRIBERIP SubscriberHost } PRIORITY Priority } [...]
Parameter | Description |
---|---|
[ Owner .] ReplicationSchemeName |
Name assigned to the new classic replication scheme. Classic replication schemes should have names that are unique from all other database objects. |
CheckConflicts |
Check for replication conflicts when simultaneously writing to bidirectionally replicated databases. See "CHECK CONFLICTS". |
COMPRESS TRAFFIC {ON | OFF} |
Compress replicated traffic to reduce the amount of network bandwidth. ON specifies that all replicated traffic for the database defined by STORE be compressed. OFF (the default) specifies no compression. See "Compressing replicated traffic" in Oracle TimesTen In-Memory Database Replication Guide for details. |
CONFLICT REPORTING SUSPEND AT Value |
Suspends conflict resolution reporting.
This clause is valid for table level replication. |
CONFLICT REPORTING RESUME AT Value |
Resumes conflict resolution reporting.
This clause is valid for table level replication. |
DATASTORE |
Define entire database as element. This type of element can only be defined for a master database that is not configured with an element of type TABLE in the same or a different replication scheme. See "Defining replication elements" in Oracle TimesTen In-Memory Database Replication Guide. |
{INCLUDE|EXCLUDE}
|
INCLUDE includes in the DATASTORE element only the tables, sequences or cache groups listed. Use one INCLUDE clause for each object type (table, sequence or cache group).
EXCLUDE includes in the DATASTORE element all objects of the given type except those listed. Use one EXCLUDE clause for each object type (table, sequence or cache group). |
DISABLE RETURN {SUBSCRIBER|ALL} NumFailures |
Set the return service failure policy so that return service blocking is disabled after the number of timeouts specified by NumFailures . Selecting SUBSCRIBER applies this policy only to the subscriber that fails to acknowledge replicated updates within the set timeout period. ALL applies this policy to all subscribers should any of the subscribers fail to respond. This failure policy can be specified for either the RETURN RECEIPT or RETURN TWOSAFE service.
If See "Managing return service timeout errors and replication state changes" in Oracle TimesTen In-Memory Database Replication Guide for details. |
DURABLE COMMIT {ON|OFF} |
Overrides the DurableCommits general connection attribute setting. DURABLE COMMIT ON enables durable commits regardless of whether the replication agent is running or stopped. |
ELEMENT ElementName |
The entity that TimesTen synchronizes between databases. TimesTen supports the entire database (DATASTORE ) and whole tables (TABLE ) as replication elements.
See "Defining replication elements" in Oracle TimesTen In-Memory Database Replication Guide for details. |
FAILTHRESHOLD Value |
The number of log files that can accumulate for a subscriber database. If this value is exceeded, the subscriber is set to the Failed state. The value 0 means "No Limit." This is the default.
See "Setting the transaction log failure threshold" in Oracle TimesTen In-Memory Database Replication Guide. |
FullStoreName |
The database, specified as one of the following:
This is the database file name specified in the DataStore attribute of the DSN description.
|
LOCAL COMMIT ACTION {NO ACTION|COMMIT} |
Specifies the default action to be taken for a return twosafe transaction in the event of a timeout.
Note: This attribute is only valid when the return twosafe service is in effect.
This setting can be overridden for specific transactions by calling the ttRepSyncSet built-in procedure. |
MASTER FullStoreName |
The database on which applications update the specified element. The MASTER database sends updates to its SUBSCRIBER databases. The FullStoreName must be the database specified in the DataStore attribute of the DSN description. |
NO RETURN |
Specifies that no return service is to be used. This is the default.
For details on the use of the return services, see "Using a return service" in Oracle TimesTen In-Memory Database Replication Guide. |
PORT PortNumber |
The TCP/IP port number on which the replication agent for the database listens for connections. If not specified, the replication agent automatically allocates a port number. |
PROPAGATOR FullStoreName |
The database that receives replicated updates and passes them on to other databases. The FullStoreName must be the database specified in the DataStore attribute of the DSN description. |
RESUME RETURN MilliSeconds |
If return service blocking has been disabled by DISABLE RETURN , this attribute sets the policy on when to re-enable return service blocking. Return service blocking is re-enabled as soon as the failed subscriber acknowledges the replicated update in a period of time that is less than the specified MilliSeconds .
|
RETURN RECEIPT [BY REQUEST] |
Enables the return receipt service, so that applications that commit a transaction to a master database are blocked until the transaction is received by all subscribers.
|
RETURN SERVICES {ON|OFF} WHEN [REPLICATION] STOPPED |
Sets return services on or off when replication is disabled (stopped or paused state).
See "Managing return service timeout errors and replication state changes" in Oracle TimesTen In-Memory Database Replication Guide for details. |
RETURN TWOSAFE [BY REQUEST] |
Enables the return twosafe service, so that applications that commit a transaction to a master database are blocked until the transaction is committed on all subscribers.
Note: This service can only be used in a bidirectional replication scheme where the elements are defined as DATASTORE. |
RETURN WAIT TIME Seconds |
Specifies the number of seconds to wait for return service acknowledgment. The default value is 10 seconds. A value of '0' means that there is no timeout. Your application can override this timeout setting by calling the returnWait parameter in the ttRepSyncSet procedure. |
SEQUENCE [ Owner .] SequenceName |
Define the sequence specified by [ Owner .] SequenceName as element. See "Defining replication elements" in Oracle TimesTen In-Memory Database Replication Guide for details. |
STORE FullStoreName |
Defines the attributes for a given database. Attributes include PORT , TIMEOUT and FAILTHRESHOLD . The FullStoreName must be the database specified in the DataStore attribute of the DSN description. |
SUBSCRIBER FullStoreName |
A database that receives updates from the MASTER databases. The FullStoreName must be the database specified in the DataStore attribute of the DSN description. |
TABLE [ Owner .] TableName |
Define the table specified by [ Owner .] TableName as element. See "Defining replication elements" in Oracle TimesTen In-Memory Database Replication Guide for details. |
TIMEOUT Seconds |
The maximum number of seconds the replication agent waits for a response from the database. Default: 120 seconds. |
TRANSMIT {DURABLE | NONDURABLE} |
Specifies whether to flush the master log to disk before sending a batch of committed transactions to the subscribers.
See "Setting transmit durability on DATASTORE element" and "Replicating the entire master database with TRANSMIT NONDURABLE" in Oracle TimesTen In-Memory Database Replication Guide for more information. |
TABLE DEFINITION CHECKING {EXACT|RELAXED} |
Specifies type of table definition checking that occurs on the subscriber:
The default is |
ROUTE MASTER FullStoreName SUBSCRIBER FullStoreName |
Denotes the NetworkOperation clause. If specified, enables you to control the network interface that a master store uses for every outbound connection to each of its subscriber stores.
Can be specified more than once. For |
MASTERIP MasterHost | SUBSCRIBERIP SubscriberHost |
MasterHost and SubscriberHost are the IP addresses for the network interface on the master and subscriber stores. Specify in dot notation or canonical format or in colon notation for IPV6.
Clause can be specified more than once. |
PRIORITY Priority |
Variable expressed as an integer from 1 to 99. Denotes the priority of the IP address. Lower integral values have higher priority. An error is returned if multiple addresses with the same priority are specified. Controls the order in which multiple IP addresses are used to establish peer connections.
Required syntax of |
The syntax for CHECK CONFLICTS
is:
{NO CHECK | CHECK CONFLICTS BY ROW TIMESTAMP COLUMN ColumnName [ UPDATE BY { SYSTEM | USER } ] [ ON EXCEPTION { ROLLBACK [ WORK ] | NO ACTION } ] [ {REPORT TO 'FileName' [ FORMAT { XML | STANDARD } ] | NO REPORT } ] }
Note:
A CHECK CONFLICTS
clause can only be used for elements of type TABLE
. The CHECK CONFLICTS
clause of the CREATE REPLICATION
or ALTER REPLICATION
statement has the following parameters:
Parameter | Description |
---|---|
CHECK CONFLICTS BY ROW TIMESTAMP |
Indicates that all update and uniqueness conflicts are to be detected. Conflicts are resolved in the manner specified by the ON EXCEPTION parameter.
It also detects delete conflicts with |
COLUMN ColumnName |
Indicates the column in the replicated table to be used for timestamp comparison. The table is specified in the ELEMENT description by TableName .
|
NO CHECK |
Specify to suppress conflict resolution for a given element. |
UPDATE BY {SYSTEM | USER} |
Specifies whether the timestamp values are maintained by TimesTen (SYSTEM ) or the application (USER ). The replicated table in the master and subscriber databases must use the same UPDATE BY specification. See "Enabling system timestamp column maintenance" and "Enabling user timestamp column maintenance" in Oracle TimesTen In-Memory Database Replication Guide for more information. The default is UPDATE BY SYSTEM . |
ON EXCEPTION {ROLLBACK [WORK] | NO ACTION} |
Specifies how to resolve a detected conflict. ROW TIMESTAMP conflict detection has the resolution options:
Default is |
REPORT TO ' FileName ' |
Specifies the file to log updates that fail the timestamp comparison. FileName is a SQL character string that cannot exceed 1,000 characters. (SQL character string literals are single-quoted strings that may contain any sequence of characters, including spaces.) The same file can be used to log failed updates for multiple tables. |
[FORMAT {XML|STANDARD}] |
Optionally specifies the conflict report format for an element. The default format is STANDARD . |
NO REPORT |
Specify to suppress logging of failed timestamp comparisons. |
The names of all databases on the same host must be unique for each classic replication scheme for each TimesTen instance.
Replication elements can only be updated (by normal application transactions) through the MASTER
database. PROPAGATOR
and SUBSCRIBER
databases are read-only.
If you define a classic replication scheme that permits multiple databases to update the same table, see "Resolving Replication Conflicts" in Oracle TimesTen In-Memory Database Replication Guide for recommendations on how to avoid conflicts when updating rows.
SELF
is intended for classic replication schemes where all participating databases are local. Do not use SELF
for a distributed classic replication scheme in a production environment, where spelling out the host name for each database in a script enables it to be used at each participating database.
Each attribute for a given STORE
may be specified only once, or not at all.
Specifying the PORT
of a database for one classic replication scheme specifies it for all classic replication schemes. All other connection attributes are specific to the classic replication scheme specified in the command.
For replication schemes, DataStoreName
is always the prefix of the TimesTen database checkpoint file names. These are the files with the .ds0 and .ds1
suffixes that are saved on disk by checkpoint operations.
If a row with a default NOT INLINE VARCHAR
value is replicated, the receiver creates a copy of this value for each row instead of pointing to the default value if and only if the default value of the receiving node is different from the sending node.
To use timestamp comparison on replicated tables, you must specify a nullable column of type BINARY(8)
to hold the timestamp value. Define the timestamp column when you create the table. You cannot add the timestamp column with the ALTER TABLE
statement. In addition, the timestamp column cannot be part of a primary key or index.
If you specify the XML report format, two XML documents are generated:
FileName
.xml
: This file contains the DTD for the report and the root node for the report. It includes the document definition and the include directive.
FileName
.include
: This file is included in FileName
.xml
and contains all the actual conflicts.
The FileName
.include
file can be truncated. Do not truncate the FileName
.xml
file.
For a complete description of the XML format, including examples of each conflict, see "Reporting conflicts to an XML file" in Oracle TimesTen In-Memory Database Replication Guide.
If you specify a report format for an element and then drop the element, the corresponding report files are not deleted.
Use the CONFLICT REPORTING SUSPEND AT
clause to specify a high water mark threshold at which the reporting of conflict resolution is suspended. When the number of conflicts per second exceeds the specified high water mark threshold, conflict resolution reporting (through the report file, if configured, and SNMP) is suspended, and an SNMP trap is emitted to indicate that reporting has been suspended.
Use the CONFLICT REPORTING RESUME AT
clause to specify a low water mark threshold at which the reporting of conflict resolution is resumed. When the rate of conflicts falls below the low water mark threshold, conflict resolution reporting is resumed. An SNMP trap is emitted to indicate the resumption of conflict resolution reporting. This trap provides the number of unreported conflicts during the time when reporting was suspended.
The suspended or resumed state of conflict reporting for a replication agent does not persist across a stop and restart of the local replication agent and the peer replication agent.
Do not use the CREATE REPLICATION
statement to replicate cache groups. Only active standby pairs can replicate cache groups. See the CREATE ACTIVE STANDBY PAIR
statement.
Replicate the contents of repl.tab
from masterds
to two subscribers, subscriber1ds
and subscriber2ds
.
CREATE REPLICATION repl.twosubscribers ELEMENT e TABLE repl.tab MASTER masterds ON "server1" SUBSCRIBER subscriber1ds ON "server2", subscriber2ds ON "server3";
Replicate the entire masterds
database to the subscriber, subscriber1ds
. The FAILTHRESHOLD
specifies that a maximum of 10 log files can accumulate on masterds
before it decides that subscriber1ds
has failed.
CREATE REPLICATION repl.wholestore ELEMENT e DATASTORE MASTER masterds ON "server1" SUBSCRIBER subscriber1ds ON "server2" STORE masterds FAILTHRESHOLD 10;
Bidirectionally replicate the entire westds
and eastds
databases and enable the RETURN TWOSAFE
service.
CREATE REPLICATION repl.biwholestore ELEMENT e1 DATASTORE MASTER westds ON "westcoast" SUBSCRIBER eastds ON "eastcoast" RETURN TWOSAFE ELEMENT e2 DATASTORE MASTER eastds ON "eastcoast" SUBSCRIBER westds ON "westcoast" RETURN TWOSAFE;
Enable the return receipt service for select transaction updates to the subscriber1ds
subscriber.
CREATE REPLICATION repl.twosubscribers ELEMENT e TABLE repl.tab MASTER masterds ON "server1" SUBSCRIBER subscriber1ds ON "server2" RETURN RECEIPT BY REQUEST SUBSCRIBER subscriber2ds ON "server3";
Replicate the contents of the customerswest
table from the west
database to the ROUNDUP
database and the customerseast
table from the east
database. Enable the return receipt service for all transactions.
CREATE REPLICATION r ELEMENT west TABLE customerswest MASTER west ON "serverwest" SUBSCRIBER roundup ON "serverroundup" RETURN RECEIPT ELEMENT east TABLE customerseast MASTER east ON "servereast" SUBSCRIBER roundup ON "serverroundup" RETURN RECEIPT;
Replicate the contents of the repl.tab
table from the centralds
database to the propds
database, which propagates the changes to the backup1ds
and backup2ds
databases.
CREATE REPLICATION repl.propagator ELEMENT a TABLE repl.tab MASTER centralds ON "finance" SUBSCRIBER propds ON "nethandler" ELEMENT b TABLE repl.tab PROPAGATOR propds ON "nethandler" SUBSCRIBER backup1ds ON "backupsystem1", backup2ds ON "backupsystem2";
Bidirectionally replicate the contents of the repl.accounts
table between the eastds
and westds
databases. Each database is both a master and a subscriber for the repl.accounts
table.
Because the repl.accounts
table can be updated on either the eastds
or westds
database, it includes a timestamp column (tstamp
). The CHECK CONFLICTS
clause establishes automatic timestamp comparison to detect any update conflicts between the two databases. In the event of a comparison failure, the entire transaction that includes an update with the older timestamp is rolled back (discarded).
CREATE REPLICATION repl.r1 ELEMENT elem_accounts_1 TABLE repl.accounts CHECK CONFLICTS BY ROW TIMESTAMP COLUMN tstamp UPDATE BY SYSTEM ON EXCEPTION ROLLBACK MASTER westds ON "westcoast" SUBSCRIBER eastds ON "eastcoast" ELEMENT elem_accounts_2 TABLE repl.accounts CHECK CONFLICTS BY ROW TIMESTAMP COLUMN tstamp UPDATE BY SYSTEM ON EXCEPTION ROLLBACK MASTER eastds ON "eastcoast" SUBSCRIBER westds ON "westcoast";
Replicate the contents of the repl.accounts
table from the activeds
database to the backupds
database, using the return twosafe service, and using TCP/IP port 40000 on activeds
and TCP/IP port 40001 on backupds
. The transactions on activeds
need to be committed whenever possible, so configure replication so that the transaction is committed even after a replication timeout using LOCAL COMMIT
ACTION
, and so that the return twosafe service is disabled when replication is stopped. To avoid significant delays in the application if the connection to the backupds
database is interrupted, configure the return service to be disabled after five transactions have timed out, but also configure the return service to be re-enabled when the backupds
database's replication agent responds in under 100 milliseconds. Finally, the bandwidth between databases is limited, so configure replication to compress the data when it is replicated from the activeds
database.
CREATE REPLICATION repl.r ELEMENT elem_accounts_1 TABLE repl.accounts MASTER activeds ON "active" SUBSCRIBER backupds ON "backup" RETURN TWOSAFE ELEMENT elem_accounts_2 TABLE repl.accounts MASTER activeds ON "active" SUBSCRIBER backupds ON "backup" RETURN TWOSAFE STORE activeds ON "active" PORT 40000 LOCAL COMMIT ACTION COMMIT RETURN SERVICES OFF WHEN REPLICATION STOPPED DISABLE RETURN SUBSCRIBER 5 RESUME RETURN 100 COMPRESS TRAFFIC ON STORE backupds ON "backup" PORT 40001;
Illustrate the conflict reporting suspend and conflict reporting resume clauses for table level replication. Use these clauses for table level replication, not database level replication. Issue the repschemes command to show that the replication scheme has been created.
Command> CREATE TABLE repl.accounts (tstamp BINARY (8) NOT NULL PRIMARY KEY, tstamp1 BINARY (8)); Command> CREATE REPLICATION repl.r2 > ELEMENT elem_accounts_1 TABLE repl.accounts > CHECK CONFLICTS BY ROW TIMESTAMP > COLUMN tstamp1 > UPDATE BY SYSTEM > ON EXCEPTION ROLLBACK WORK > MASTER westds ON "west1" > SUBSCRIBER eastds ON "east1" > ELEMENT elem_accounts_2 TABLE repl.accounts > CHECK CONFLICTS BY ROW TIMESTAMP > COLUMN tstamp1 > UPDATE BY SYSTEM > ON EXCEPTION ROLLBACK WORK > MASTER eastds ON "east1" > SUBSCRIBER westds ON "west1" > STORE westds > CONFLICT REPORTING SUSPEND AT 20 > CONFLICT REPORTING RESUME AT 10; Command> REPSCHEMES; Replication Scheme REPL.R2: Element: ELEM_ACCOUNTS_1 Type: Table REPL.ACCOUNTS Conflict Check Column: TSTAMP1 Conflict Exception Action: Rollback Work Conflict Timestamp Update: System Conflict Report File: (none) Master Store: WESTDS on WEST1 Transmit Durable Subscriber Store: EASTDS on EAST1 Element: ELEM_ACCOUNTS_2 Type: Table REPL.ACCOUNTS Conflict Check Column: TSTAMP1 Conflict Exception Action: Rollback Work Conflict Timestamp Update: System Conflict Report File: (none) Master Store: EASTDS on EAST1 Transmit Durable Subscriber Store: WESTDS on WEST1 Store: EASTDS on EAST1 Port: (auto) Log Fail Threshold: (none) Retry Timeout: 120 seconds Compress Traffic: Disabled Store: WESTDS on WEST1 Port: (auto) Log Fail Threshold: (none) Retry Timeout: 120 seconds Compress Traffic: Disabled Conflict Reporting Suspend: 20 Conflict Reporting Resume: 10 1 replication scheme found.
Example of NetworkOperation
clause with 2 MASTERIP
and SUBSCRIBERIP
clauses:
CREATE REPLICATION r ELEMENT e DATASTORE MASTER rep1 SUBSCRIBER rep2 RETURN RECEIPT MASTERIP "1.1.1.1" PRIORITY 1 SUBSCRIBERIP "2.2.2.2" PRIORITY 1 MASTERIP "3.3.3.3" PRIORITY 2 SUBSCRIBERIP "4.4.4.4" PRIORITY 2;
Example of NetworkOperation
clause. Use the default sending interface but a specific receiving network:
CREATE REPLICATION r ELEMENT e DATASTORE MASTER rep1 SUBSCRIBER rep2 ROUTE MASTER rep1 ON "machine1" SUBSCRIBER rep2 ON "machine2" SUBSCRIBERIP "rep2nic2" PRIORITY 1;
Example of using the NetworkOperation
clause with multiple subscribers:
CREATE REPLICATION r ELEMENT e DATASTORE MASTER rep1 SUBSCRIBER rep2,rep3 ROUTE MASTER rep1 ON "machine1" SUBSCRIBER rep2 ON "machine2" MASTERIP "1.1.1.1" PRIORITY 1 SUBSCRIBERIP "2.2.2.2" PRIORITY 1 ROUTE MASTER Rep1 ON "machine1" SUBSCRIBER Rep3 ON "machine2" MASTERIP "3.3.3.3" PRIORITY 2 SUBSCRIBERIP "4.4.4.4";
ALTER ACTIVE STANDBY PAIR
ALTER REPLICATION
CREATE ACTIVE STANDBY PAIR
DROP ACTIVE STANDBY PAIR
DROP REPLICATION
The CREATE SEQUENCE
statement creates a new sequence number generator that can subsequently be used by multiple users to generate unique integers. Use the CREATE SEQUENCE
statement to define the initial value of the sequence, the increment value, and the minimum and maximum values, and to determine whether the sequence continues to generate numbers after the minimum or maximum is reached.
CREATE SEQUENCE
(if owner) or CREATE ANY SEQUENCE
(if not owner).
CREATE SEQUENCE [Owner.]SequenceName [INCREMENT BY IncrementValue] [MINVALUE MinimumValue] [MAXVALUE MaximumValue] [CYCLE] [CACHE CacheValue] [START WITH StartValue]
All parameters in the CREATE SEQUENCE
statement must be integer values.
If you do not specify a value in the parameters, TimesTen defaults to an ascending sequence that starts with 1, increments by 1, has the default maximum value and does not cycle.
There is no ALTER SEQUENCE
statement in TimesTen. To alter a sequence, use the DROP SEQUENCE
statement and then create a new sequence with the same name. For example, to change the MINVALUE
, drop the sequence and re-create it with the same name and with the desired MINVALUE
.
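For example, a minimal sketch that re-creates the mysequence sequence (from the first example below) with a different MINVALUE:

DROP SEQUENCE mysequence;
CREATE SEQUENCE mysequence INCREMENT BY 1 MINVALUE 1 MAXVALUE 1000;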
Do not create a sequence with the same name as a view or materialized view.
Sequences with the CYCLE
attribute cannot be replicated.
Incrementing SEQUENCE values with CURRVAL and NEXTVAL
To refer to the SEQUENCE
values in a SQL statement, use CURRVAL
and NEXTVAL
.
CURRVAL
returns the value of the last call to NEXTVAL
if there is one in the current session, otherwise it returns an error.
NEXTVAL
increments the current sequence value by the specified increment and returns the value for each row accessed.
The current value of a sequence is a connection-specific value. If there are two concurrent connections to the same database, each connection has its own CURRVAL
of the same sequence set to its last NEXTVAL
reference. When the maximum value is reached, SEQUENCE
either wraps or issues an error statement, depending on the value of the CYCLE
option of the CREATE SEQUENCE
. In the case of recovery, sequences are not rolled back. It is possible that the range of values of a sequence can have gaps; however, each sequence value is still unique.
If you execute a single SQL statement with multiple NEXTVAL
references, TimesTen only increments the sequence once, returning the same value for all occurrences of NEXTVAL
. If a SQL statement contains both NEXTVAL
and CURRVAL
, NEXTVAL
is executed first. CURRVAL
and NEXTVAL
have the same value in that SQL statement.
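For instance, a sketch using the s1 sequence and tab1 table from the examples below; both select list expressions return the same value because NEXTVAL is executed first:

SELECT s1.NEXTVAL, s1.CURRVAL FROM tab1;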
Note:
NEXTVAL
cannot be used in a query on a standby node of an active standby pair.NEXTVAL
and CURRVAL
can be used in:
The SelectList
of a SELECT
statement, but not the SelectList
of a subquery
The SelectList
of an INSERT...SELECT
statement
The SET
clause of an UPDATE
statement
Create a sequence.
CREATE SEQUENCE mysequence INCREMENT BY 1 MINVALUE 2 MAXVALUE 1000;
This example assumes that tab1
has 1 row in the table and that CYCLE
is used:
CREATE SEQUENCE s1 MINVALUE 2 MAXVALUE 4 CYCLE; SELECT s1.NEXTVAL FROM tab1; /* Returns the value of 2; */ SELECT s1.NEXTVAL FROM tab1; /* Returns the value of 3; */ SELECT s1.NEXTVAL FROM tab1; /* Returns the value of 4; */
After the maximum value is reached, the cycle starts from the minimum value for an ascending sequence.
SELECT s1.NEXTVAL FROM tab1; /* Returns the value of 2; */
To create a sequence and generate a sequence number:
CREATE SEQUENCE seq INCREMENT BY 1; INSERT INTO student VALUES (seq.NEXTVAL, 'Sally');
To use a sequence in an UPDATE SET
clause:
UPDATE student SET studentno = seq.NEXTVAL WHERE name = 'Sally';
To use a sequence in a query:
SELECT seq.CURRVAL FROM student;
The CREATE SYNONYM
statement creates a public or private synonym for a database object. A synonym is an alias for a database object. The object can be a table, view, synonym, sequence, PL/SQL stored procedure, PL/SQL function, PL/SQL package, materialized view or cache group.
A private synonym is owned by a specific user and exists in that user's schema. A private synonym is accessible to users other than the owner only if those users have appropriate privileges on the underlying object and specify the schema along with the synonym name.
A public synonym is accessible to all users as long as the user has appropriate privileges on the underlying object.
CREATE SYNONYM
is a DDL statement.
Synonyms can be used in these SQL statements:
DML statements: SELECT
, DELETE
, INSERT
, UPDATE
, MERGE
Some DDL statements: GRANT
, REVOKE
, CREATE TABLE ... AS SELECT
, CREATE VIEW ... AS SELECT
, CREATE INDEX
, DROP INDEX
Some cache group statements: LOAD CACHE GROUP
, UNLOAD CACHE GROUP
, REFRESH CACHE GROUP
, FLUSH CACHE GROUP
CREATE SYNONYM
(if owner) or CREATE ANY SYNONYM
(if not owner) to create a private synonym.
CREATE PUBLIC SYNONYM
to create a public synonym.
CREATE [OR REPLACE] [PUBLIC] SYNONYM [Owner1.]synonym FOR [Owner2.]object
Parameter | Description |
---|---|
[OR REPLACE] |
Specify OR REPLACE to re-create the synonym if it already exists. Use this clause to change the definition of an existing synonym without first dropping it. |
[PUBLIC] |
Specify PUBLIC to create a public synonym. Public synonyms are accessible to all users, but each user must have appropriate privileges on the underlying object in order to use the synonym.
When resolving references to an object, TimesTen uses a public synonym only if the object is not prefaced by a schema name. |
[ Owner1 .] synonym |
Specify the owner of the synonym. You cannot specify an owner if you have specified PUBLIC . If you omit both PUBLIC and Owner1 , TimesTen creates the synonym in your own schema.
Specify the name for the synonym, which is limited to 30 bytes. |
[ Owner2 .] object |
Specify the owner in which the object resides. Specify the object name for which you are creating a synonym. If you do not qualify object with Owner2 , the object is in your own schema. The Owner2 and object do not need to exist when the synonym is created. |
The schema object does not need to exist when its synonym is created.
Do not create a public synonym with the same name as a TimesTen built-in procedure.
Appropriate privileges must be granted to a user on the object aliased by the synonym before the user can use the synonym.
A private synonym cannot have the same name as tables, views, sequences, PL/SQL packages, functions, procedures, and cache groups that are in the same schema as the private synonym.
A public synonym may have the same name as a private synonym or an object name.
If the PassThrough
attribute is set so that a query needs to be executed in the Oracle database, the query is sent to the Oracle database without any changes. If the query uses a synonym for a table in a cache group, then a synonym with the same name must be defined for the corresponding Oracle database table for the query to be successful.
When an object name is used in the DML and DDL statements in which a synonym can be used, the object name is resolved as follows:
Search for a match within the current schema. If no match is found, then:
Search for a match with a public synonym name. If no match is found, then:
Search for a match in the SYS schema. If no match is found, then:
The object does not exist.
TimesTen creates a public synonym for some objects in the SYS schema. The name of the public synonym is the same as the object name. Thus steps 2 and 3 in the object name resolution can be switched without changing the results of the search.
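The following sketch illustrates the resolution order; the object names are hypothetical. An unqualified name matches an object in the current schema before any public synonym is considered:
CREATE TABLE reports (id NUMBER);                     -- owned by the current user
CREATE PUBLIC SYNONYM reports FOR hr.reports_master;  -- public synonym with the same name
SELECT COUNT(*) FROM reports;                         -- resolves to the current user's REPORTS table, not the public synonym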
In a replicated environment for an active standby pair, if DDL_REPLICATION_LEVEL
=2 when you execute the CREATE SYNONYM
on the active database, the synonym will be replicated to all databases in the replication scheme. See "Making DDL changes in an active standby pair" in the Oracle TimesTen In-Memory Database Replication Guide for more information.
As user ttuser
, create a synonym for the jobs
table. Verify that you can retrieve the information using the synonym. Display the contents of the SYS.USER_SYNONYMS
system view.
Command> CREATE SYNONYM synjobs FOR jobs; Synonym created. Command> SELECT FIRST 2 * FROM jobs; < AC_ACCOUNT, Public Accountant, 4200, 9000 > < AC_MGR, Accounting Manager, 8200, 16000 > 2 rows found. Command> SELECT FIRST 2 * FROM synjobs; < AC_ACCOUNT, Public Accountant, 4200, 9000 > < AC_MGR, Accounting Manager, 8200, 16000 > 2 rows found. Command> SELECT * FROM sys.user_synonyms; < SYNJOBS, TTUSER, JOBS, <NULL> > 1 row found.
Create a public synonym for the employees
table.
Command> CREATE PUBLIC SYNONYM pubemp FOR employees; Synonym created.
Verify that pubemp
is listed as a public synonym in the SYS.ALL_SYNONYMS
system view.
Command> SELECT * FROM sys.all_synonyms; < PUBLIC, TABLES, SYS, TABLES, <NULL> > ... < TTUSER, SYNJOBS, TTUSER, JOBS, <NULL> > < PUBLIC, PUBEMP, TTUSER, EMPLOYEES, <NULL> > 57 rows found.
Create a synonym for the tab
table in the terry
schema. Describe the synonym.
Command> CREATE SYNONYM syntab FOR terry.tab; Synonym created. Command> DESCRIBE syntab; Synonym TTUSER.SYNTAB: For Table TERRY.TAB Columns: COL1 VARCHAR2 (10) INLINE COL2 VARCHAR2 (10) INLINE 1 Synonyms found.
Redefine the synjobs
synonym to be an alias for the employees
table by using the OR REPLACE
clause. Describe synjobs
.
Command> CREATE OR REPLACE SYNONYM synjobs FOR employees; Synonym created. Command> DESCRIBE synjobs; Synonym TTUSER.SYNJOBS: For Table TTUSER.EMPLOYEES Columns: *EMPLOYEE_ID NUMBER (6) NOT NULL FIRST_NAME VARCHAR2 (20) INLINE LAST_NAME VARCHAR2 (25) INLINE NOT NULL EMAIL VARCHAR2 (25) INLINE UNIQUE NOT NULL PHONE_NUMBER VARCHAR2 (20) INLINE HIRE_DATE DATE NOT NULL JOB_ID VARCHAR2 (10) INLINE NOT NULL SALARY NUMBER (8,2) COMMISSION_PCT NUMBER (2,2) MANAGER_ID NUMBER (6) DEPARTMENT_ID NUMBER (4) 1 Synonyms found.
The CREATE TABLE
statement defines a table.
CREATE TABLE
(if owner) or CREATE ANY TABLE
(if not owner).
The owner of the created table must have the REFERENCES
privilege on tables referenced by the REFERENCES
clause.
ADMIN
privilege if replicating a new table across an active standby pair when DDL_REPLICATION_LEVEL
=2 and DDL_REPLICATION_ACTION
=INCLUDE
. These attributes cause the CREATE TABLE
to implicitly execute an ALTER ACTIVE STANDBY PAIR
... INCLUDE TABLE
statement. See "ALTER SESSION" for more details.
You cannot specify a PRIMARY
KEY
in both the ColumnDefinition
clause and the PRIMARY
KEY
clause.
The syntax for a persistent table
CREATE TABLE [Owner.]TableName ( {{ColumnDefinition} [,...] [PRIMARY KEY (ColumnName [,...]) | [[CONSTRAINT ForeignKeyName] FOREIGN KEY ([ColumnName] [,...]) REFERENCES RefTableName [(ColumnName [,...])] [ON DELETE CASCADE]] [...] } ) [TableCompression] [UNIQUE HASH ON (HashColumnName [,...]) PAGES = PrimaryPages] [AGING {LRU| USE ColumnName LIFETIME Num1 {SECOND[S] | MINUTE[S] | HOUR[S] |DAY[S]} [CYCLE Num2 {SECOND[S] | MINUTE[S] |HOUR[S] |DAY[S]}] }[ON|OFF] ] [AS SelectQuery]
The syntax for a global temporary table is:
CREATE GLOBAL TEMPORARY TABLE [Owner.]TableName ( {{ColumnDefinition} [,...] [PRIMARY KEY (ColumnName [,...]) | [[CONSTRAINT ForeignKeyName] FOREIGN KEY ([ColumnName] [,...]) REFERENCES RefTableName [(ColumnName [,...])] [ON DELETE CASCADE]] [...] } ) [UNIQUE HASH ON (HashColumnName [,...]) PAGES = PrimaryPages] [ON COMMIT { DELETE | PRESERVE } ROWS ]
Parameter | Description |
---|---|
[ Owner .] TableName |
Name to be assigned to the new table. Two tables cannot have the same owner name and table name.
If you do not specify the owner name, your login name becomes the owner name for the new table. Owners of tables in TimesTen are determined by the user ID settings or login names. Oracle Database table owner names must always match TimesTen table owner names. For rules on creating names, see "Basic names". |
GLOBAL TEMPORARY |
Specifies that the table being created is a global temporary table. A temporary table is similar to a persistent table but it is effectively materialized only when referenced in a connection.
A global temporary table definition is persistent and is visible to all connections, but the table instance is local to each connection. It is created when a command referencing the table is compiled for a connection and dropped when the connection is disconnected. All instances of the same temporary table have the same name but they are identified by an additional connection ID together with the table name. Global temporary tables are allocated in temp space. The contents of a global temporary table cannot be shared between connections. Each connection sees only its own content of the table and compiled commands that reference temporary tables are not shared among connections. When Temporary tables are automatically excluded from active standby pairs or when the A cache group table cannot be defined as a temporary table. Changes to temporary tables cannot be tracked with XLA. Operations on temporary tables do generate log records. The amount of log they generate is less than for permanent tables. Truncate table is not supported with global temporary tables. Local temporary tables are not supported. No object privileges are needed to access global temporary tables. Do not specify the |
ColumnDefinition |
An individual column in a table. Each table must have at least one column. See "Column Definition".
If you specify the AS SelectQuery clause, do not specify data types for the columns; data types and data type lengths are derived from SelectQuery. |
ColumnName |
Names of the columns that form the primary key for the table to be created. Up to 16 columns can be specified for the primary key. For a foreign key, the ColumnName is optional. If not specified for a foreign key, the reference is to the parent table's primary key.
If you specify the AS SelectQuery clause and all expressions in SelectQuery are simple columns, you can omit the column names; the columns of the new table take their names from SelectQuery. |
PRIMARY KEY |
PRIMARY KEY may only be specified once in a table definition. It provides a way of identifying one or more columns that, together, form the primary key of the table. The contents of the primary key have to be unique and NOT NULL . You cannot specify a column as both UNIQUE and a single column PRIMARY KEY . |
CONSTRAINT ForeignKeyName |
Specifies an optional user-defined name for a foreign key. If not provided by the user, the system provides a default name. |
FOREIGN KEY |
This specifies a foreign key constraint between the new table and the referenced table identified by RefTableName . There are two lists of columns specified in the foreign key constraint.
Columns in the first list are columns of the new table and are called the referencing columns. Columns in the second list are columns of the referenced table and are called referenced columns. These two lists must match in data type, including length, precision and scale. The referenced table must already have a primary key or unique index on the referenced columns. The column name list of referenced columns is optional. If omitted, the primary index of the referenced table is used as the referenced columns. The declaration of a foreign key creates a range index on the referencing columns. The user cannot drop the referenced table or its referenced index until the referencing table is dropped. The foreign key constraint asserts that each row in the new table must match a row in the referenced table such that the contents of the referencing columns are equal to the contents of the referenced columns. A foreign key can be defined on a global temporary table, but it can only reference a global temporary table. A foreign key cannot reference an active parent table. An active parent table is one that has some instance materialized for a connection. If you specify the AS SelectQuery clause, do not specify foreign keys on the table you are creating. |
[ON DELETE CASCADE] |
Enables the ON DELETE CASCADE referential action. If specified, when rows containing referenced key values are deleted from a parent table, rows in child tables with dependent foreign key values are also deleted. |
TableCompression |
Defines compression at the column level, which stores data more efficiently. Eliminates redundant storage of duplicate values within columns and improves the performance of SQL queries that perform full table scans. See "In-memory columnar compression of tables" for details. |
UNIQUE |
UNIQUE provides a way of identifying a column where each row must contain a unique value. |
UNIQUE HASH ON |
Hash index for the table. This parameter is used for equality predicates. UNIQUE HASH ON requires that a primary key be defined. |
HashColumnName |
Column defined in the table that is to participate in the hash key of this table. The columns specified in the hash index must be identical to the columns in the primary key. |
PAGES = PrimaryPages |
Sizes the hash index to reflect the expected number of pages in your table. To determine the value for PrimaryPages , divide the number of expected rows in your table by 256. For example, if your table has 256,000 rows, specify 1000 for PrimaryPages (256000/256=1000).
The value for PrimaryPages must be a positive constant. If your estimate for PrimaryPages is too small, the hash index has more hash collisions; if it is too large, memory is wasted. |
[ON COMMIT {DELETE|PRESERVE} ROWS] |
This optional clause specifies whether to delete or preserve rows when a transaction that touches a global temporary table is committed. If not specified, the rows of the temporary table are deleted on commit (see the sketch following this parameter table). |
[AGING LRU [ON|OFF]] |
If specified, defines the LRU aging policy for the table. The LRU aging policy defines the type of aging (least recently used (LRU)), the aging state (ON or OFF ) and the LRU aging attributes.
Set the aging state to either ON or OFF; the default is ON. LRU attributes are defined by calling the ttAgingLRUConfig built-in procedure. For more information about LRU aging, see "Implementing aging in your tables" in Oracle TimesTen In-Memory Database Operations Guide. |
[AGING USE ColumnName ... [ON|OFF]] |
If specified, defines the time-based aging policy for the table. The time-based aging policy defines the type of aging (time-based), the aging state (ON or OFF ) and the time-based aging attributes.
Set the aging state to either ON or OFF; the default is ON. Time-based aging attributes are defined at the SQL level and are specified by the LIFETIME and CYCLE clauses. Specify ColumnName as the column to use for time-based aging; the column must be defined as NOT NULL. The values of the column that you use for aging are updated by your applications. If the value of this column is unknown for some rows, and you do not want the rows to be aged, define the column with a large default value (the column cannot be NULL). You can define your aging column with a data type such as TIMESTAMP or TT_DATE. For more information about time-based aging, see "Implementing aging in your tables" in Oracle TimesTen In-Memory Database Operations Guide. |
LIFETIME Num1 {SECOND[S]| MINUTE[S]|HOUR[S]| DAY[S]} |
LIFETIME is a time-based aging attribute and is a required clause.
Specify the LIFETIME clause after the AGING USE ColumnName clause. The LIFETIME value determines how long rows are kept in the table; a row becomes a candidate for aging when (SYSDATE - ColumnValue) exceeds the specified lifetime. Specify Num1 as a positive integer constant. The concept of time resolution is supported. If the LIFETIME unit is DAYS, the age of a row is measured in whole days (see the time resolution example in the examples section). If the aging column has the TT_DATE data type, the LIFETIME unit must be DAYS. |
[CYCLE Num2 {SECOND[S] |MINUTE[S]|HOUR[S]| DAY[S]}] |
CYCLE is a time-based aging attribute and is optional. Specify the CYCLE clause after the LIFETIME clause.
The CYCLE value determines how often the system examines rows to see whether they have exceeded the specified LIFETIME and should be aged out. Specify Num2 as a positive integer constant. If you do not specify the CYCLE clause, the default aging cycle is 5 minutes. If the aging state is OFF, rows are not aged, but the aging policy (including the cycle) remains defined and can be enabled later with ALTER TABLE. |
AS SelectQuery |
If specified, creates a new table from the contents of the result set of the SelectQuery . The rows returned by SelectQuery are inserted into the table.
Data types and data type lengths are derived from SelectQuery. Do not specify data types on the columns of the table you are creating.
You can specify a statement level optimizer hint after the SELECT keyword in SelectQuery. |
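The following is a minimal sketch of a global temporary table that uses the ON COMMIT clause described in the table above; the table and column names are hypothetical:
CREATE GLOBAL TEMPORARY TABLE session_cart (item_id NUMBER NOT NULL PRIMARY KEY, qty TT_INTEGER) ON COMMIT PRESERVE ROWS;
With ON COMMIT PRESERVE ROWS, rows inserted by a connection remain visible to that connection after commit; with the default behavior (DELETE ROWS), the rows are deleted when the transaction commits.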
You can only use the keyword, ENABLE
, when defining columns in the CREATE
TABLE
statement.
For all data types other than LOBs, the syntax is as follows:
ColumnName ColumnDataType [DEFAULT DefaultVal] [[NOT] INLINE] [PRIMARY KEY | UNIQUE | NULL [UNIQUE] | NOT NULL [ENABLE] [PRIMARY KEY | UNIQUE] ]
For LOB data types, you cannot create a primary key or unique constraint on LOB columns. In addition, LOB data types are stored out of line, so the INLINE
attribute cannot be specified.
For all LOB data types, the syntax is:
ColumnName ColumnDataType [DEFAULT DefaultVal] [[NOT] NULL [ENABLE]] | [[NOT] NULL [ENABLE]] [DEFAULT DefaultVal]
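As an illustration of the two column definition forms above, the following sketch (with hypothetical table and column names) defines a non-LOB column with a default value and a CLOB column, which cannot have the INLINE attribute or participate in a primary key or unique constraint:
CREATE TABLE notes (note_id NUMBER NOT NULL PRIMARY KEY, author VARCHAR2(30) DEFAULT 'unknown' NOT NULL, body CLOB);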
The column definition has the following parameters:
Parameter | Description |
---|---|
ColumnName |
Name to be assigned to one of the columns in the new table. No two columns in the table can be given the same name. A table can have a maximum of 1000 columns.
If you specify the AS SelectQuery clause and SelectQuery selects simple columns rather than expressions, you can omit the column names; the columns of the new table take their names from SelectQuery. |
ColumnDataType |
Type of data the column can contain. Some data types require that you indicate a length. See Chapter 1, "Data Types" for the data types that can be specified.
If you specify the AS SelectQuery clause, do not specify ColumnDataType; data types and data type lengths are derived from SelectQuery. |
DEFAULT DefaultVal |
Indicates that if a value is not specified for the column in an INSERT statement, the default value DefaultVal is inserted into the column. The default value specified must have a compatible type with the column's data type. A default value can be as long as the data type of the associated column allows. You cannot assign a default value for the ROWID data type or for columns in read-only cache groups. In addition, you cannot use a function within the DEFAULT clause.
Legal values for DefaultVal include NULL, a constant value, SYSDATE, USER, CURRENT_USER, SESSION_USER, and SYSTEM_USER.
If the default value is one of the users (USER, CURRENT_USER, SESSION_USER, or SYSTEM_USER), the column's data type must be either CHAR or VARCHAR2. |
INLINE| NOT INLINE |
By default, variable-length columns whose declared column length is greater than 128 bytes are stored out of line. Variable-length columns whose declared column length is less than or equal to 128 bytes are stored inline. The default behavior can be overridden during table creation through the use of the INLINE and NOT INLINE keywords.
If you specify the AS SelectQuery clause, the NOT INLINE/INLINE attributes of the selected columns are carried over to the new table. |
NULL |
Indicates that the column can contain NULL values.
If you specify the AS SelectQuery clause, you can specify NULL on a column of the new table to override a NOT NULL constraint on the corresponding column of the selected table. |
NOT NULL [ENABLE] |
Indicates that the column cannot contain NULL values. If NOT NULL is specified, any statement that attempts to place a NULL value in the column is rejected.
If you specify the AS SelectQuery clause, NOT NULL constraints on the selected columns are carried over to the new table. You can only use the keyword ENABLE when defining columns in the CREATE TABLE statement. |
UNIQUE |
A unique constraint placed on the column. No two rows in the table may have the same value for this column. TimesTen creates a unique range index to enforce uniqueness. So a column with a unique constraint can use more memory and time during execution than a column without the constraint. Cannot be used with PRIMARY KEY .
If you specify the AS SelectQuery clause, unique constraints on the selected table are not carried over to the new table. |
PRIMARY KEY |
A unique NOT NULL constraint placed on the column. No two rows in the table may have the same value for this column. Cannot be used with UNIQUE. |
All columns participating in the primary key are NOT NULL
.
A PRIMARY KEY
that is specified in the ColumnDefinition
can only be specified for one column.
You cannot specify a PRIMARY
KEY
in both the ColumnDefinition
clause and the PRIMARY
KEY
clause.
For both primary key and foreign key constraints, duplicate column names are not allowed in the constraint column list.
If ON DELETE CASCADE
is specified on a foreign key constraint for a child table, a user can delete rows from a parent table for which the user has the DELETE
privilege without requiring explicit DELETE
privilege on the child table.
To change the ON DELETE CASCADE
triggered action, drop then redefine the foreign key constraint.
You cannot create a table that has a foreign key referencing a cached table.
UNIQUE
column constraint and default column values are not supported with materialized views.
Use the ALTER TABLE
statement to change the representation of the primary key index for a table.
If you specify the AS
SelectQuery
clause:
Data types and data type lengths are derived from the SelectQuery
. Do not specify data types on the columns of the table you are creating.
TimesTen defines NOT NULL constraints on columns of the new table when the constraints were explicitly created on the corresponding columns of the selected table and SelectQuery selects the column itself rather than an expression containing the column.
NOT NULL constraints that were implicitly created by TimesTen on columns of the selected table (for example, primary keys) are carried over to the new table. You can override the NOT NULL constraint on the selected table by defining the new column as NULL. For example: CREATE TABLE newtable (newcol NULL) AS SELECT (col) FROM tab;
NOT INLINE/INLINE attributes are carried over to the new table.
Unique keys, foreign keys, indexes and column default values are not carried over to the new table.
If all expressions in SelectQuery
are columns, rather than expressions, then you can omit the columns from the table you are creating. In this case, the names of the columns are the same as the columns in SelectQuery
. If the SelectQuery
contains an expression rather than a simple column reference, either specify a column alias or name the column in the CREATE TABLE
statement.
Do not specify foreign keys on the table you are creating.
Do not specify the SELECT FOR UPDATE
clause in SelectQuery
.
The ORDER BY
clause is not supported when you use the AS
SelectQuery
clause.
SelectQuery
cannot contain set operators UNION
, MINUS
, INTERSECT
.
In a replicated environment for an active standby pair, if DDL_REPLICATION_LEVEL
=2 when you execute the CREATE TABLE
on the active database, the table, including global temporary tables, will be replicated to all databases in the replication scheme. Tables are only replicated to TimesTen instances when DDL_REPLICATION_LEVEL=2
.
To include a new table into an active standby pair when the table is created, set DDL_REPLICATION_LEVEL=2
and DDL_REPLICATION_ACTION
to INCLUDE
before executing the CREATE TABLE
statement on the active database. If DDL_REPLICATION_ACTION
is set to EXCLUDE
, the new table is not included in the active standby pair. You must execute the ALTER ACTIVE STANDBY PAIR INCLUDE TABLE
statement to include the table after creation on all databases. In this case, the table must be empty and present on all databases before executing the ALTER ACTIVE STANDBY PAIR INCLUDE TABLE
statement as the table contents will be truncated when this statement is executed.
See "ALTER SESSION" for more information.
By default, a range index is created to enforce the primary key. Use the UNIQUE HASH
clause to specify a hash index for the primary key.
If your application performs range queries using a table's primary key, then choose a range index for that table by omitting the UNIQUE HASH
clause.
If your application performs only exact match lookups on the primary key, then a hash index may offer better response time and throughput. In such a case, specify the UNIQUE HASH
clause.
A hash index is created with a fixed size that remains constant for the life of the table or until the hash index is resized with the ALTER TABLE
statement or when the index is dropped and recreated. A smaller hash index results in more hash collisions. A larger hash index reduces collisions but can waste memory. Hash key comparison is a fast operation, so a small number of hash collisions should not cause a performance problem for TimesTen.
To ensure that your hash index is sized correctly, your application must indicate the expected size of your table with the value of the RowPages
parameter of the SET
PAGES
clause. Compute this value by dividing the number of expected rows in your table by 256. For example, if your table has 256,000 rows, specify 1000 for the value of RowPages (256000/256=1000).
At most 16 columns are allowed in a hash key.
ON DELETE CASCADE
is supported on detail tables of a materialized view. If you have a materialized view defined over a child table, a deletion from the parent table causes cascaded deletes in the child table. This, in turn, triggers changes in the materialized view.
The total number of rows reported by the DELETE
statement does not include rows deleted from child tables as a result of the ON DELETE CASCADE
action.
For ON DELETE CASCADE
: Since different paths may lead from a parent table to a child table, the following rule is enforced:
Either all paths from a parent table to a child table are "delete" paths or all paths from a parent table to a child table are "do not delete" paths. Specify ON DELETE CASCADE
on all child tables on the "delete" path.
This rule does not apply to paths from one parent to different children or from different parents to the same child.
For ON DELETE CASCADE
, the following rule is also enforced.
If a table is reached by a "delete" path, then all its children are also reached by a "delete" path.
For ON DELETE CASCADE
with replication, the following restrictions apply:
The foreign keys specified with ON DELETE CASCADE
must match between the Master and subscriber for replicated tables. Checking is done at runtime. If there is an error, the receiver thread stops working.
All tables in the delete cascade tree have to be replicated if any table in the tree is replicated. This restriction is checked when the replication scheme is created or when a foreign key with ON DELETE CASCADE
is added to one of the replication tables. If an error is found, the operation is aborted. You may be required to drop the replication scheme first before trying to change the foreign key constraint.
You must stop the replication agent before adding or dropping a foreign key on a replicated table.
The data in a global temporary table is private to the current connection and does not need to be secured between users. Thus global temporary tables do not require object privileges.
After you have defined an aging policy for the table, you cannot change the policy from LRU to time-based or from time-based to LRU. You must first drop aging and then alter the table to add a new aging policy.
The aging policy must be defined to change the aging state.
For the time-based aging policy, you cannot add or modify the aging column. This is because you cannot add or modify a NOT NULL
column.
LRU and time-based aging can be combined in one system. If you use only LRU aging, the aging thread wakes up based on the cycle specified for the whole database. If you use only time-based aging, the aging thread wakes up based on an optimal frequency. This frequency is determined by the values specified in the CYCLE
clause for all tables. If you use both LRU and time-based aging, then the thread wakes up based on a combined consideration of both types.
The following rules determine if a row is accessed or referenced for LRU aging:
Any rows used to build the result set of a SELECT
statement.
Any rows used to build the result set of an INSERT ... SELECT
statement.
Any rows that are about to be updated or deleted.
Compiled commands are marked invalid and need recompilation when you either drop LRU aging from or add LRU aging to tables that are referenced in the commands.
Call the ttAgingScheduleNow
procedure to schedule the aging process immediately regardless of the aging state.
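For example, to trigger aging immediately for a table with an aging policy (a sketch; agingdemo is the table used in the aging examples later in this section):
Command> CALL ttAgingScheduleNow('agingdemo');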
Aging restrictions:
LRU aging and time-based aging are not supported on detail tables of materialized views.
LRU aging and time-based aging are not supported on global temporary tables.
You cannot drop the column that is used for time-based aging.
The aging policy and aging state must be the same in all sites of replication.
Tables that are related by foreign keys must have the same aging policy.
For LRU aging, if a child row is not a candidate for aging, neither this child row nor its parent row are deleted. ON DELETE CASCADE
settings are ignored.
For time-based aging, if a parent row is a candidate for aging, then all child rows are deleted. ON DELETE CASCADE
(whether specified or not) is ignored.
You can compress tables at the column level, which stores data more efficiently. This eliminates redundant storage of duplicate values within columns and improves the performance of SQL queries that perform full table scans.
You can define one or more columns in a table to be compressed, which is called a compressed column group. You can define one or more compressed column groups in each table.
A dictionary table is created for each compressed column group that contains a column with all the distinct values of the compressed column group. The compressed column group now contains a pointer to the row in the dictionary table for the appropriate value. The width of this pointer can be 1, 2, or 4 bytes long depending on the maximum number of entries you defined for the dictionary table. So if the column being compressed was wider than the pointer width, you have reduced the amount of space used by the table.
Figure 6-1 shows the compressed column group in the table pointing to the appropriate row in the dictionary table.
The dictionary table has a column of pointers to each of the distinct values. When the user configures the maximum number of distinct entries for the compressed column group, the size of the compressed column group is set as follows:
1 byte for a maximum number of entries of 255 (2^8-1). When the maximum number is between 1 and 255, the dictionary size is set to 255 (2^8-1) values and the compressed column group pointer column is 1 byte.
2 bytes for a maximum number of entries of 65,535 (2^16-1). When the maximum number is between 256 and 65,535, the dictionary size is set to 65,535 (2^16-1) values and the compressed column group pointer column is 2 bytes.
4 bytes for a maximum number of entries of 4,294,967,295 (2^32-1). When the maximum number is between 65,536 and 4,294,967,295, the dictionary size is set to 4,294,967,295 (2^32-1) values and the compressed column group pointer column is 4 bytes. This is the default.
The syntax for TableCompression
is:
[COMPRESS (CompressColumns [,...])] OPTIMIZED FOR READ
The CompressColumns
syntax is as follows:
{ColumnDefinition | (ColumnDefinition [,...])} BY DICTIONARY [MAXVALUES = CompressMax]
TableCompression
syntax has the following parameters:
Parameter | Description |
---|---|
COMPRESS ( CompressColumns [,...]) |
Defines a compressed column group for a table that is enabled for compression. This can include one or more columns in the table. However, a column can be included in only one compressed column group.
Each compressed column group is limited to a maximum of 16 columns. |
OPTIMIZED FOR READ |
When specified on the CREATE TABLE statement, enables the table for compressed column groups. You can add compressed column groups at table creation with the CREATE TABLE statement or later with the ALTER TABLE statement. |
BY DICTIONARY |
Defines a compression dictionary for each compressed column group. |
MAXVALUES = CompressMax |
CompressMax is the maximum number of distinct values expected in the compressed column group. It sets the size of the compressed column group pointer column to 1, 2, or 4 bytes and sets the maximum number of entries in the dictionary table.
For the dictionary table, the specified maximum is rounded up to one of three sizes: 255 (2^8-1), 65,535 (2^16-1), or 4,294,967,295 (2^32-1) entries.
The maximum size defaults to 2^32-1 if the MAXVALUES clause is omitted. |
Compressed column groups can be added at the time of table creation or added later using ALTER TABLE
. You can drop the entire compressed column group with the ALTER TABLE
statement.
You can create a primary or unique key, where part or all of the columns included in the key are compressed. For compressed columns included in a primary or unique key, you can include columns that exist within a compressed column group, but you do not have to include all of the columns within the compressed column group. In addition, you can include columns from different compressed column groups.
Indexes can be created on any columns in the table. This includes compressed columns and includes columns that exist in separate compression column groups.
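For example, assuming a table like the compressed employees table shown in the examples later in this section, an index can be created on a compressed column (a sketch):
CREATE INDEX emp_job_ix ON employees (job_id);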
LOB columns cannot be compressed.
Compression is not supported on columns in replicated tables, cache group tables, grid tables, or on global temporary tables. You cannot create a table with the CREATE TABLE AS SELECT
statement when defining in-memory columnar compression for that table in that statement.
You cannot create materialized views and materialized view logs on tables enabled for compression.
A range index is created on partnumber
because it is the primary key.
Command> CREATE TABLE price > (partnumber INTEGER NOT NULL PRIMARY KEY, > vendornumber INTEGER NOT NULL, > vendpartnum CHAR(20) NOT NULL, > unitprice DECIMAL(10,2), > deliverydays SMALLINT, > discountqty SMALLINT); Command> INDEXES price; Indexes on table SAMPLEUSER.PRICE: PRICE: unique range index on columns: PARTNUMBER 1 index found. 1 index found on 1 table.
A hash index is created on column clubname
, the primary key.
CREATE TABLE recreation.clubs (clubname CHAR(15) NOT NULL PRIMARY KEY, clubphone SMALLINT, activity CHAR(18)) UNIQUE HASH ON (clubname) PAGES = 30;
A range index is created on the two columns membername
and club
because together they form the primary key.
Command> CREATE TABLE recreation.members > (membername CHAR(20) NOT NULL, > club CHAR(15) NOT NULL, > memberphone SMALLINT, > PRIMARY KEY (membername, club)); Command> INDEXES recreation.members; Indexes on table RECREATION.MEMBERS: MEMBERS: unique range index on columns: MEMBERNAME CLUB 1 index found on 1 table.
No hash index is created on the table recreation.events
.
CREATE TABLE recreation.events (sponsorclub CHAR(15), event CHAR(30), coordinator CHAR(20), results VARBINARY(10000));
A hash index is created on the column vendornumber
.
CREATE TABLE purchasing.vendors (vendornumber INTEGER NOT NULL PRIMARY KEY, vendorname CHAR(30) NOT NULL, contactname CHAR(30), phonenumber CHAR(15), vendorstreet CHAR(30) NOT NULL, vendorcity CHAR(20) NOT NULL, vendorstate CHAR(2) NOT NULL, vendorzipcode CHAR(10) NOT NULL, vendorremarks VARCHAR(60)) UNIQUE HASH ON (vendornumber) PAGES = 101;
A hash index is created on the columns membername
and club
because together they form the primary key.
CREATE TABLE recreation.members (membername CHAR(20) NOT NULL, club CHAR(15) NOT NULL, memberphone SMALLINT, PRIMARY KEY (membername, club)) UNIQUE HASH ON (membername, club) PAGES = 100;
A hash index is created on the columns firstname
and lastname
because together they form the primary key in the table authors
. A foreign key is created on the columns authorfirstname
and authorlastname
in the table books
that references the primary key in the table authors
.
CREATE TABLE authors (firstname VARCHAR(255) NOT NULL, lastname VARCHAR(255) NOT NULL, description VARCHAR(2000), PRIMARY KEY (firstname, lastname)) UNIQUE HASH ON (firstname, lastname) PAGES=20; CREATE TABLE books (title VARCHAR(100), authorfirstname VARCHAR(255), authorlastname VARCHAR(255), price DECIMAL(5,2), FOREIGN KEY (authorfirstname, authorlastname) REFERENCES authors(firstname, lastname));
The following statement overrides the default storage behavior of VARCHAR
columns and creates a table where one VARCHAR (10)
column is NOT INLINE
and one VARCHAR (144)
is INLINE
.
CREATE TABLE t1 (c1 VARCHAR(10) NOT INLINE NOT NULL, c2 VARCHAR(144) INLINE NOT NULL);
The following statement creates a table with a UNIQUE
column for book titles.
CREATE TABLE books (title VARCHAR(100) UNIQUE, authorfirstname VARCHAR(255), authorlastname VARCHAR(255), price DECIMAL(5,2), FOREIGN KEY (authorfirstname, authorlastname) REFERENCES authors(firstname, lastname));
The following statement creates a table with a default value of 1 on column x1
and a default value of SYSDATE
on column d
.
CREATE TABLE t1 (x1 INT DEFAULT 1, d TIMESTAMP DEFAULT SYSDATE);
This example creates the rangex
table and defines col1
as the primary key. A range index is created by default.
Command> CREATE TABLE rangex (col1 TT_INTEGER PRIMARY KEY); Command> INDEXES rangex; Indexes on table SAMPLEUSER.RANGEX: RANGEX: unique range index on columns: COL1 1 index found 1 index found on 1 table.
The following statement illustrates the use of the ON DELETE CASCADE
clause for parent/child tables of the HR
schema. Tables with foreign keys have been altered to enable ON DELETE CASCADE
.
ALTER TABLE countries ADD CONSTRAINT countr_reg_fk FOREIGN KEY (region_id) REFERENCES regions(region_id) ON DELETE CASCADE; ALTER TABLE locations ADD CONSTRAINT loc_c_id_fk FOREIGN KEY (country_id) REFERENCES countries(country_id) ON DELETE CASCADE; ALTER TABLE departments ADD CONSTRAINT dept_loc_fk FOREIGN KEY (location_id) REFERENCES locations (location_id) ON DELETE CASCADE; ALTER TABLE employees ADD CONSTRAINT emp_dept_fk FOREIGN KEY (department_id) REFERENCES departments ON DELETE CASCADE; ALTER TABLE employees ADD CONSTRAINT emp_job_fk FOREIGN KEY (job_id) REFERENCES jobs (job_id); ALTER TABLE job_history ADD CONSTRAINT jhist_job_fk FOREIGN KEY (job_id) REFERENCES jobs; ALTER TABLE job_history ADD CONSTRAINT jhist_emp_fk FOREIGN KEY (employee_id) REFERENCES employees ON DELETE CASCADE; ALTER TABLE job_history ADD CONSTRAINT jhist_dept_fk FOREIGN KEY (department_id) REFERENCES departments ON DELETE CASCADE;
This example shows how time resolution works with aging.
If lifetime is 3 days (resolution is in days):
If (SYSDATE - ColumnValue) <= 3, do not age.
If (SYSDATE - ColumnValue) > 3, then the row is a candidate for aging.
If (SYSDATE - ColumnValue) = 3 days, 22 hours, the row is not aged out if you specified a lifetime of 3 days. The row would be aged out if you had specified a lifetime of 72 hours.
This example creates a table with LRU aging. Aging state is ON
by default.
CREATE TABLE agingdemo (agingid NUMBER NOT NULL PRIMARY KEY, name VARCHAR2 (20) ) AGING LRU; Command> DESCRIBE agingdemo; Table USER.AGINGDEMO: Columns: *AGINGID NUMBER NOT NULL NAME VARCHAR2 (20) INLINE AGING LRU ON 1 table found. (primary key columns are indicated with *)
This example creates a table with time-based aging. Lifetime is 3 days. Cycle is not specified, so the default is 5 minutes. Aging state is OFF
.
CREATE TABLE agingdemo2 (agingid NUMBER NOT NULL PRIMARY KEY, name VARCHAR2 (20), agingcolumn TIMESTAMP NOT NULL ) AGING USE agingcolumn LIFETIME 3 DAYS OFF; Command> DESCRIBE agingdemo2; Table USER.AGINGDEMO2: Columns: *AGINGID NUMBER NOT NULL NAME VARCHAR2 (20) INLINE AGINGCOLUMN TIMESTAMP (6) NOT NULL Aging use AGINGCOLUMN lifetime 3 days cycle 5 minutes off 1 table found. (primary key columns are indicated with *)
This example generates an error message. It illustrates that after you create an aging policy, you cannot change it. You must drop aging and redefine aging.
CREATE TABLE agingdemo2 (agingid NUMBER NOT NULL PRIMARY KEY, name VARCHAR2 (20), agingcolumn TIMESTAMP NOT NULL ) AGING USE agingcolumn LIFETIME 3 DAYS OFF; ALTER TABLE agingdemo2 ADD AGING LRU; 2980: Cannot add aging policy to a table with an existing aging policy. Have to drop the old aging first The command failed. DROP aging on the table and redefine with LRU aging. ALTER TABLE agingdemo2 DROP AGING; ALTER TABLE agingdemo2 ADD AGING LRU; Command> DESCRIBE agingdemo2; Table USER.AGINGDEMO2: Columns: *AGINGID NUMBER NOT NULL NAME VARCHAR2 (20) INLINE AGINGCOLUMN TIMESTAMP (6) NOT NULL Aging lru on 1 table found. (primary key columns are indicated with *)
Attempt to create a table with time-based aging. Define aging column with data type TT_DATE
and LIFETIME
3 hours. An error is generated because the LIFETIME
unit must be expressed as DAYS
.
Command> CREATE TABLE aging1 (col1 TT_INTEGER PRIMARY KEY, col2 TT_DATE NOT NULL) AGING USE col2 LIFETIME 3 HOURS; 2977: Only DAY lifetime unit is allowed with a TT_DATE column The command failed.
Use AS
SelectQuery
clause to create the table emp
. Select last_name
from the employees
table where employee_id
between 100 and 105. You see 6 rows inserted into emp
. First issue the SELECT
statement to see rows that should be returned.
Command> SELECT last_name FROM employees WHERE employee_id BETWEEN 100 AND 105; < King > < Kochhar > < De Haan > < Hunold > < Ernst > < Austin > 6 rows found. Command> CREATE TABLE emp AS SELECT last_name FROM employees >WHERE employee_id BETWEEN 100 AND 105; 6 rows inserted. Command> SELECT * FROM emp; < King > < Kochhar > < De Haan > < Hunold > < Ernst > < Austin > 6 rows found.
Use AS
SelectQuery
to create table totalsal
. Sum salary
and insert the result into totalsal
. Define alias s
for SelectQuery
expression.
Command> CREATE TABLE totalsal AS SELECT SUM (salary) s FROM employees; 1 row inserted. Command> SELECT * FROM totalsal; < 691400 > 1 row found.
Use AS
SelectQuery
to create table defined with column commission_pct
. Set default to .3. First describe table employees
to show that column commission_pct
is of type NUMBER (2,2)
. For table c_pct
, column commission_pct
inherits type NUMBER (2,2)
from column commission_pct
of employees
table.
Command> DESCRIBE employees; Table SAMPLEUSER.EMPLOYEES: Columns: *EMPLOYEE_ID NUMBER (6) NOT NULL FIRST_NAME VARCHAR2 (20) INLINE LAST_NAME VARCHAR2 (25) INLINE NOT NULL EMAIL VARCHAR2 (25) INLINE UNIQUE NOT NULL PHONE_NUMBER VARCHAR2 (20) INLINE HIRE_DATE DATE NOT NULL JOB_ID VARCHAR2 (10) INLINE NOT NULL SALARY NUMBER (8,2) COMMISSION_PCT NUMBER (2,2) MANAGER_ID NUMBER (6) DEPARTMENT_ID NUMBER (4) 1 table found. (primary key columns are indicated with *) Command> CREATE TABLE c_pct (commission_pct DEFAULT .3) AS SELECT commission_pct FROM employees; 107 rows inserted. Command> DESCRIBE c_pct; Table SAMPLEUSER.C_PCT: Columns: COMMISSION_PCT NUMBER (2,2) DEFAULT .3 1 table found. (primary key columns are indicated with *)
The following example creates the employees
table where the job_id
is compressed.
Command> CREATE TABLE EMPLOYEES (EMPLOYEE_ID NUMBER (6) PRIMARY KEY, FIRST_NAME VARCHAR2(20), LAST_NAME VARCHAR2(25) NOT NULL, EMAIL VARCHAR2(25) NOT NULL, PHONE_NUMBER VARCHAR2(20), HIRE_DATE DATE NOT NULL, JOB_ID VARCHAR2(10) NOT NULL, SALARY NUMBER (8,2), COMMISSION_PCT NUMBER (2,2), MANAGER_ID NUMBER(6), DEPARTMENT_ID NUMBER(4)) COMPRESS (JOB_ID BY DICTIONARY) OPTIMIZED FOR READ; Command> DESCRIBE EMPLOYEES; Table MYSCHEMA.EMPLOYEES: Columns: *EMPLOYEE_ID NUMBER (6) NOT NULL FIRST_NAME VARCHAR2 (20) INLINE LAST_NAME VARCHAR2 (25) INLINE NOT NULL EMAIL VARCHAR2 (25) INLINE NOT NULL PHONE_NUMBER VARCHAR2 (20) INLINE HIRE_DATE DATE NOT NULL JOB_ID VARCHAR2 (10) INLINE NOT NULL SALARY NUMBER (8,2) COMMISSION_PCT NUMBER (2,2) MANAGER_ID NUMBER (6) DEPARTMENT_ID NUMBER (4) COMPRESS ( JOB_ID BY DICTIONARY ) OPTIMIZED FOR READ 1 table found. (primary key columns are indicated with *)
The following example shows that there are three dictionary table sizes. The value you specify for the maximum number of entries is rounded up to the next size. For example, specifying 400 as the maximum number of job IDs creates a dictionary table that can have at most 65,535 entries. The default size of 2^32-1 is not shown in the DESCRIBE
output.
Command> CREATE TABLE employees (employee_id NUMBER(6) PRIMARY KEY, first_name VARCHAR2(20), last_name VARCHAR2(25), email VARCHAR2(25) NOT NULL, job_id VARCHAR2(10) NOT NULL, manager_id NUMBER(6), department_id NUMBER(4)) COMPRESS (last_name BY DICTIONARY MAXVALUES=70000, job_id BY DICTIONARY MAXVALUES=400, department_id BY DICTIONARY MAXVALUES=100) OPTIMIZED FOR READ; Command> DESCRIBE employees; Table MYSCHEMA.EMPLOYEES: Columns: *EMPLOYEE_ID NUMBER (6) NOT NULL FIRST_NAME VARCHAR2 (20) INLINE LAST_NAME VARCHAR2 (25) INLINE EMAILS VARCHAR2 (25) INLINE NOT NULL JOB_ID VARCHAR2 (10) INLINE NOT NULL MANAGER_ID NUMBER (6) DEPARTMENT_ID NUMBER (4) COMPRESS ( LAST_NAME BY DICTIONARY, JOB_ID BY DICTIONARY MAXVALUES=65535, DEPARTMENT_ID BY DICTIONARY MAXVALUES=255 ) OPTIMIZED FOR READ 1 table found. (primary key columns are indicated with *)
ALTER TABLE
DROP TABLE
TRUNCATE TABLE
UPDATE
The CREATE USER
statement creates a user of a TimesTen database.
ADMIN
CREATE USER user IDENTIFIED BY {password | "password"} CREATE USER user IDENTIFIED EXTERNALLY
Parameter | Description |
---|---|
user |
Name of the user that is being added to the database. |
IDENTIFIED |
Identification clause. |
BY { password |" password "} |
Internal users must be given a TimesTen password. To perform database operations using an internal user name, the user must supply this password. |
EXTERNALLY |
Identifies the operating system user to the TimesTen database. To perform database operations as an external user, the process needs a TimesTen external user name that matches the user name authenticated by the operating system or network. A password is not required by TimesTen because the user has been authenticated by the operating system at login time. |
Database users can be internal or external.
Internal users are defined for a TimesTen database.
External users are defined by an external authority such as the operating system. External users cannot be assigned a TimesTen password.
Passwords are case-sensitive.
When a user is created, the user has the privileges granted to PUBLIC
and no additional privileges.
You cannot create a user across a client/server connection. You must use a direct connection when creating a user.
In TimesTen, user brad
is the same as user "brad"
. In both cases, the name of the user is created as BRAD
.
User names are TT_CHAR
data type.
When replication is configured, this statement is replicated.
To create the internal user terry
with the password "secret"
:
CREATE USER terry IDENTIFIED BY "secret"; User created.
Verify that user terry
has been created:
Command> SELECT * FROM sys.all_users WHERE username='TERRY'; < TERRY, 11, 2009-05-12 10:28:04.610353 > 1 row found.
To identify the external user pat
to the TimesTen database:
CREATE USER pat IDENTIFIED EXTERNALLY; User created.
ALTER USER
DROP USER
GRANT
REVOKE
The CREATE VIEW
statement creates a view of the tables specified in the SelectQuery
clause. A view is a logical table that is based on one or more detail tables. The view itself contains no data. It is sometimes called a nonmaterialized view to distinguish it from a materialized view, which does contain data that has already been calculated from detail tables.
The user executing the statement must have the CREATE VIEW
privilege (if owner) or CREATE ANY VIEW
(if not the owner) for another user's view.
The owner of the view must have the SELECT
privilege on the detail tables.
CREATE VIEW [Owner.]ViewName AS SelectQuery
Parameter | Description |
---|---|
[ Owner .] ViewName |
Name assigned to the new view. |
SelectQuery |
Selects the columns from the detail tables to be used in the view. Can also create indexes on the view. |
Restrictions on the SELECT query
There are several restrictions on the query that is used to define the view.
A SELECT *
query in a view definition is expanded when the view is created. Any columns added after a view is created do not affect the view.
Do not use the following in a SELECT
statement that is used to create a view:
FIRST
ORDER BY
If used, this is ignored by CREATE VIEW
. The result will not be sorted.
Arguments
Each expression in the select list must have a unique name. The name of a simple column expression is that column's name unless a column alias is defined. ROWID is considered an expression and needs an alias (see the sketch after this list of restrictions).
Do not use SELECT FOR UPDATE
to create a view.
Certain TimesTen query restrictions are not checked when a non-materialized view is created. Views that violate those restrictions may be allowed to be created, but an error is returned when the view is referenced later in an executed statement.
When a view is referenced in the FROM
clause of a SELECT
statement, its name is replaced by its definition as a derived table at parsing time. If it is not possible to merge all clauses of a view to the same clause in the original select query to form a legal query without the derived table, the content of this derived table is materialized. For example, if both the view and the referencing select specify aggregates, the view is materialized before its result can be joined with other tables of the select.
Use the DROP [MATERIALIZED] VIEW
statement to drop a view.
A view cannot be altered with an ALTER TABLE
statement.
Referencing a view can fail because of dropped or altered detail tables.
Create a nonmaterialized view from the employees
table.
Command> CREATE VIEW v1 AS SELECT employee_id, email FROM employees; Command> SELECT FIRST 5 * FROM v1; < 100, SKING > < 101, NKOCHHAR > < 102, LDEHAAN > < 103, AHUNOLD > < 104, BERNST > 5 rows found.
Create a nonmaterialized view tview
with column max1
from an aggregate query on the table t1
.
CREATE VIEW tview (max1) AS SELECT MAX(x1) FROM t1;
CREATE MATERIALIZED VIEW
CREATE TABLE
DROP [MATERIALIZED] VIEW
The DELETE
statement deletes rows from a table.
No privilege is required for the table owner.
DELETE
on the table for another user's table.
DELETE [hint] [FIRST NumRows] FROM [Owner.]TableName [CorrelationName] [WHERE SearchCondition] [RETURNING|RETURN Expression[,...]INTO DataItem[,...]]
Parameter | Description |
---|---|
hint |
Specifies a statement level optimizer hint for the DELETE statement. For more information on hints, see "Statement level optimizer hints". |
FIRST NumRows |
Specifies the number of rows to delete. FIRST NumRows is not supported in subquery statements. NumRows must be either a positive INTEGER or a dynamic parameter placeholder. The syntax for a dynamic parameter placeholder is either ? or :DynamicParameter . The value of the dynamic parameter is supplied when the statement is executed. |
[ Owner .] TableName [CorrelationName ] |
Designates a table from which any rows satisfying the search condition are to be deleted.
|
SearchCondition |
Specifies which rows are to be deleted. If no rows satisfy the search condition, the table is not changed. If the WHERE clause is omitted, all rows are deleted. The search condition can contain a subquery. |
Expression |
Valid expression syntax. See Chapter 3, "Expressions". |
DataItem |
Host variable or PL/SQL variable that stores the retrieved Expression value. |
If all the rows of a table are deleted, the table is empty but continues to exist until you issue a DROP TABLE
statement.
The DELETE
operation fails if it violates any foreign key constraint. See "CREATE TABLE" for a description of the foreign key constraint.
The total number of rows reported by the DELETE
statement does not include rows deleted from child tables as a result of the ON DELETE CASCADE
action.
If ON DELETE CASCADE
is specified on a foreign key constraint for a child table, a user can delete rows from a parent table for which the user has the DELETE
privilege without requiring explicit DELETE
privilege on the child table.
Restrictions on the RETURNING
clause:
Each Expression
must be a simple expression. Aggregate functions are not supported.
You cannot return a sequence number into an OUT
parameter.
ROWNUM
and subqueries cannot be used in the RETURNING
clause.
Parameters in the RETURNING
clause cannot be duplicated anywhere in the DELETE
statement.
Using the RETURNING
clause to return multiple rows requires PL/SQL BULK COLLECT
functionality. See Oracle TimesTen In-Memory Database PL/SQL Developer's Guide.
In PL/SQL, you cannot use a RETURNING
clause with a WHERE CURRENT
operation.
Rows for orders whose quantity is less than 50 are deleted.
DELETE FROM purchasing.orderitems WHERE quantity < 50;
The following query deletes all the duplicate orders assuming that id
is not a primary key:
DELETE FROM orders a WHERE EXISTS (SELECT 1 FROM orders b WHERE a.id = b.id and a.rowid < b.rowid);
The following sequence of statements causes a foreign key violation.
CREATE TABLE master (name CHAR(30), id CHAR(4) NOT NULL PRIMARY KEY); CREATE TABLE details (masterid CHAR(4), description VARCHAR(200), FOREIGN KEY (masterid) REFERENCES master(id)); INSERT INTO master VALUES ('Elephant', '0001'); INSERT INTO details VALUES ('0001', 'A VERY BIG ANIMAL'); DELETE FROM master WHERE id = '0001';
If you attempt to delete a "busy" table, an error results. In this example, t1
is a "busy" table that is a parent table with foreign key constraints based on it.
CREATE TABLE t1 (a INT NOT NULL, b INT NOT NULL, PRIMARY KEY (a)); CREATE TABLE t2 (c INT NOT NULL, FOREIGN KEY (c) REFERENCES t1(a)); INSERT INTO t1 VALUES (1,1); INSERT INTO t2 VALUES (1); DELETE FROM t1;
An error is returned:
SQL ERROR (3001): Foreign key violation [TTFOREIGN_0] a row in child table T2 has a parent in the delete range.
Delete an employee from employees
. Declare empid
and name
as variables with the same data types as employee_id
and last_name
. Delete the row, returning employee_id
and last_name
into the variables. Verify that the correct row was deleted.
Command> VARIABLE empid NUMBER(6) NOT NULL; Command> VARIABLE name VARCHAR2(25) INLINE NOT NULL; Command> DELETE FROM employees WHERE last_name='Ernst' > RETURNING employee_id, last_name INTO :empid,:name; 1 row deleted. Command> PRINT empid name; EMPID : 104 NAME : Ernst
This statement drops an active standby pair replication scheme.
ADMIN
DROP ACTIVE STANDBY PAIR
DROP ACTIVE STANDBY PAIR
has no parameters.
The active standby pair is dropped, but all objects such as tables, cache groups, and materialized views still exist on the database on which the statement was issued.
You cannot execute the DROP ACTIVE STANDBY PAIR
statement when Oracle Clusterware is used with TimesTen.
ALTER ACTIVE STANDBY PAIR
CREATE ACTIVE STANDBY PAIR
The DROP CACHE GROUP
statement drops the table associated with the cache group, and removes the cache group definition from the CACHE_GROUP
system table.
No privilege is required for the cache group owner.
If not the cache group owner, DROP ANY CACHE GROUP
and
DROP ANY TABLE
if at least one table in the cache group is not owned by the current user.
DROP CACHE GROUP [Owner.]GroupName
Parameter | Description |
---|---|
[ Owner .] GroupName |
Name of the cache group to be deleted. |
If you attempt to delete a cache group table that is in use, TimesTen returns an error.
Asynchronous writethrough cache groups cannot be dropped while the replication agent is running.
Automatically installed Oracle Database objects for read-only cache groups and cache groups with the AUTOREFRESH
attribute are uninstalled by the cache agent. If the cache agent is not running during the DROP CACHE GROUP
operation, the Oracle Database objects are uninstalled on the next startup of the cache agent.
You cannot execute the DROP CACHE GROUP
statement when performed under the serializable isolation level. An error message is returned when attempted.
If you issue a DROP CACHE GROUP
statement, and there is an autorefresh operation currently running, then:
If LockWait
interval is 0, the DROP CACHE GROUP
statement fails with a lock timeout error.
If LockWait
interval is nonzero, then the current autorefresh transaction is preempted (rolled back), and the DROP
statement continues. This affects all cache groups with the same autorefresh interval.
DROP CACHE GROUP westerncustomers;
ALTER CACHE GROUP
CREATE CACHE GROUP
The DROP FUNCTION
statement removes a standalone stored function from the database. Do not use this statement to remove a function that is part of a package.
No privilege is required for the function owner.
DROP ANY PROCEDURE
for another user's function.
DROP FUNCTION [Owner.]FunctionName
Parameter | Description |
---|---|
[ Owner .] FunctionName |
Name of the function to be dropped. |
When you drop a function, TimesTen invalidates objects that depend on the dropped function. If you subsequently reference one of these objects, TimesTen attempts to recompile the object and returns an error message if you have not re-created the dropped function.
Do not use this statement to remove a function that is part of a package. Either drop the package or redefine the package without the function using the CREATE PACKAGE
statement with the OR REPLACE
clause.
To use the DROP FUNCTION
statement, you must have PL/SQL enabled in your database. If you do not have PL/SQL enabled in your database, an error is thrown.
The following statement drops the function myfunc
and invalidates all objects that depend on myfunc
:
Command> DROP FUNCTION myfunc; Function dropped.
If PL/SQL is not enabled in your database, TimesTen returns an error:
Command> DROP FUNCTION myfunc; 8501: PL/SQL feature not installed in this TimesTen database The command failed.
The DROP INDEX
statement removes the specified index.
No privilege is required for the index owner.
DROP ANY INDEX
for another user's index.
DROP INDEX [Owner.]IndexName [FROM [Owner.]TableName]
If you attempt to drop a "busy" index—an index that is in use or that enforces a foreign key—an error results. To drop a foreign key and the index associated with it, use the ALTER TABLE
statement.
If an index is created through a UNIQUE
column constraint, it can only be dropped by dropping the constraint with an ALTER TABLE
DROP UNIQUE
statement. See "CREATE TABLE" for more information about the UNIQUE
column constraint.
If a DROP INDEX
operation is or was active in an uncommitted transaction, other transactions doing DML operations that do not access that index are blocked.
If an index is dropped, any prepared statement that uses the index is prepared again automatically the next time the statement is executed.
If no table name is specified, the index name must be unique for the specified or implicit owner. The implicit owner, in the absence of a specified table or owner, is the current user running the program.
If no index owner is specified and a table is specified, the default owner is the table owner.
If a table is specified and no owner is specified for it, the default table owner is the current user running the program.
The table and index owners must be the same.
An index on a temporary table cannot be dropped by a connection if some other connection has an instance of the table that is not empty.
If the index is replicated across an active standby pair and if DDL_REPLICATION_LEVEL
is 2, use the DROP INDEX
statement to drop the index from the standby pair in the replication scheme. See "Making DDL changes in an active standby pair" in the Oracle TimesTen In-Memory Database Replication Guide for more information.
Drop index partsorderedindex
which is defined on table orderitems
using one of the following:
DROP INDEX partsorderedindex FROM purchasing.orderitems;
or
DROP INDEX purchasing.partsorderedindex;
The DROP [MATERIALIZED] VIEW
statement removes the specified view, including any hash indexes and any range indexes associated with it.
View owner or DROP ANY [MATERIALIZED] VIEW
(if not owner) and
Table owner or DROP ANY TABLE
(if not owner) and
Index owner or DROP ANY INDEX
(if not owner) if there is an index on the view.
DROP [MATERIALIZED] VIEW [Owner.]ViewName
Parameter | Description |
---|---|
MATERIALIZED |
Specifies that the view is materialized. |
[ Owner .] ViewName |
Identifies the view to be dropped. |
When you perform a DROP VIEW
operation on a materialized view, the detail tables are updated and locked. An error may result if the detail table was already locked by another transaction.
The following statement drops the custorder
view.
DROP VIEW custorder;
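To drop a materialized view, include the MATERIALIZED keyword. For example (assuming a materialized view named custmatview exists; the name is hypothetical):
DROP MATERIALIZED VIEW custmatview;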
CREATE MATERIALIZED VIEW
CREATE VIEW
The DROP MATERIALIZED VIEW LOG
statement removes the materialized view log for a detail table. It also drops the global temporary table that was created by the CREATE MATERIALIZED VIEW LOG
statement.
No privilege is required for the table owner.
DROP ANY TABLE
for another user's table.
DROP MATERIALIZED VIEW LOG ON [Owner.]TableName
Parameter | Description |
---|---|
[ Owner .] TableName |
Name of the detail table for which the materialized view log was created. |
This statement drops the materialized view log for the specified detail table. The materialized view log cannot be dropped if there is an asynchronous materialized view that depends on the log for refreshing.
DROP MATERIALIZED VIEW LOG ON employees;
CREATE MATERIALIZED VIEW LOG
CREATE MATERIALIZED VIEW
DROP [MATERIALIZED] VIEW
The DROP PACKAGE
statement removes a stored package from the database. Both the specification and the body are dropped. DROP PACKAGE BODY
removes only the body of the package.
No privilege is required for the package owner.
DROP ANY PROCEDURE
for another user's package.
DROP PACKAGE [BODY] [Owner.]PackageName
Parameter | Description |
---|---|
PACKAGE [BODY] |
Specify BODY to drop only the body of the package. Omit BODY to drop both the specification and body of the package. |
[ Owner .] PackageName |
Name of the package to be dropped. |
When you drop only the body of the package, TimesTen does not invalidate dependent objects. However, you cannot execute one of the procedures or stored functions declared in the package specification until you re-create the package body.
TimesTen invalidates any objects that depend on the package specification. If you subsequently reference one of these objects, then TimesTen tries to recompile the object and returns an error if you have not re-created the dropped package.
Do not use this statement to remove a single object from the package. Instead, re-create the package without the object using the CREATE PACKAGE
and CREATE PACKAGE BODY
statements with the OR REPLACE
clause.
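For illustration only, the following sketch re-creates a hypothetical package mypkg (with procedures proc_a and proc_b, neither of which appears elsewhere in this chapter) so that proc_b is removed. The OR REPLACE clause replaces both the specification and the body without dropping the package first:
CREATE OR REPLACE PACKAGE mypkg AS
  -- proc_b is omitted from the new specification
  PROCEDURE proc_a;
END mypkg;
/
CREATE OR REPLACE PACKAGE BODY mypkg AS
  PROCEDURE proc_a IS
  BEGIN
    NULL;   -- the original body of proc_a goes here
  END proc_a;
END mypkg;
/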
To use the DROP PACKAGE [BODY]
statement, you must have PL/SQL enabled in your database. If you do not have PL/SQL enabled in your database, TimesTen returns an error.
The following statement drops the body of package samplePackage
:
Command> DROP PACKAGE BODY SamplePackage;
Package body dropped.
To drop both the specification and body of package samplepackage
:
Command> DROP PACKAGE samplepackage;
Package dropped.
The DROP PROCEDURE
statement removes a standalone stored procedure from the database. Do not use this statement to remove a procedure that is part of a package.
No privilege is required for the procedure owner.
DROP ANY PROCEDURE
for another user's procedure.
DROP PROCEDURE [Owner.]ProcedureName
Parameter | Description |
---|---|
[ Owner .] ProcedureName |
Name of the procedure to be dropped. |
When you drop a procedure, TimesTen invalidates objects that depend on the dropped procedure. If you subsequently reference one of these objects, TimesTen attempts to recompile the object and returns an error message if you have not re-created the dropped procedure.
Do not use this statement to remove a procedure that is part of a package. Either drop the package or redefine the package without the procedure using the CREATE PACKAGE
statement with the OR REPLACE
clause.
To use the DROP PROCEDURE
statement, you must have PL/SQL enabled in your database. If you do not have PL/SQL enabled in your database, TimesTen returns an error.
The following statement drops the procedure myproc
and invalidates all objects that depend on myproc
:
Command> DROP PROCEDURE myproc;
Procedure dropped.
If PL/SQL is not enabled in your database, TimesTen returns an error:
Command> DROP PROCEDURE myproc;
8501: PL/SQL feature not installed in this TimesTen database
The command failed.
The DROP REPLICATION
statement destroys a classic replication scheme and removes it from the executing database.
ADMIN
DROP REPLICATION [Owner.]ReplicationSchemeName
Parameter | Description |
---|---|
[ Owner .] ReplicationSchemeName |
Name assigned to the classic replication scheme. |
Dropping the last replication scheme on a database does not delete the replicated tables. These tables continue to exist in the database whether or not any replication schemes are defined.
The following statement erases the executing database's knowledge of a classic replication scheme, r
:
DROP REPLICATION r;
ALTER REPLICATION
CREATE REPLICATION
The DROP SEQUENCE
statement removes an existing sequence number generator.
No privilege is required for the sequence owner.
DROP ANY SEQUENCE
for another user's sequence.
DROP SEQUENCE [Owner.]SequenceName
Parameter | Description |
---|---|
[ Owner .] SequenceName |
Name of the sequence number generator. |
Sequences can be dropped while they are in use.
There is no ALTER SEQUENCE
statement in TimesTen. To alter a sequence, use the DROP SEQUENCE
statement and then create a new sequence with the same name. For example, to change the MINVALUE
, drop the sequence and re-create it with the same name and with the desired MINVALUE
.
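For illustration, the following sketch changes the minimum value of a hypothetical sequence mysequence by dropping it and re-creating it with the same name; the other CREATE SEQUENCE options should match your original definition:
DROP SEQUENCE mysequence;
CREATE SEQUENCE mysequence MINVALUE 10 INCREMENT BY 1;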
If the sequence is part of a replication scheme, use the ALTER REPLICATION
statement to drop the sequence from the replication scheme. Then use the DROP SEQUENCE
statement to drop the sequence.
The following statement drops mysequence
:
DROP SEQUENCE mysequence;
The DROP SYNONYM
statement removes a synonym from the database.
If the synonym is replicated across an active standby pair and if DDL_REPLICATION_LEVEL
is 2, use the DROP SYNONYM
statement to drop the synonym from the active standby pair in the replication scheme. See "Making DDL changes in an active standby pair" in the Oracle TimesTen In-Memory Database Replication Guide for more information.
No privilege is required to drop the private synonym by its owner. The DROP ANY SYNONYM
privilege is required to drop another user's private synonym.
The DROP PUBLIC SYNONYM
privilege is required to drop a PUBLIC
synonym.
To drop a private synonym, use the following syntax:
DROP SYNONYM [Owner.]SynonymName
To drop a public synonym:
DROP PUBLIC SYNONYM SynonymName
Parameter | Description |
---|---|
PUBLIC |
Specify PUBLIC to drop a public synonym. |
Owner |
Optionally, specify the owner for a private synonym. If you omit the owner, the private synonym must exist in the current user's schema. |
SynonymName |
Specify the name of the synonym to be dropped. |
Drop the public synonym pubemp
:
DROP PUBLIC SYNONYM pubemp;
Synonym dropped.
Drop the private synjobs
synonym:
DROP SYNONYM synjobs;
Synonym dropped.
As user terry
with DROP ANY SYNONYM
privilege, drop the private syntab
synonym owned by ttuser
.
DROP SYNONYM ttuser.syntab;
Synonym dropped.
The DROP TABLE
statement removes the specified table, including any hash indexes and any range indexes associated with it.
No privilege is required for the table owner.
DROP ANY TABLE
for another user's table.
DROP TABLE [Owner.]TableName
Parameter | Description |
---|---|
[ Owner .] TableName |
Identifies the table to be dropped. |
If you attempt to drop a table that is in use, an error results.
If DROP TABLE
is or was active in an uncommitted transaction, other transactions doing DML operations that do not access that table are allowed to proceed.
If the table is a replicated table, you can do one of the following:
Use the DROP REPLICATION
statement to drop the replication scheme before issuing the DROP TABLE
statement.
If DDL_REPLICATION_LEVEL
is 2, use the DROP TABLE
statement to drop the table from the active standby pair in the replication scheme.
If DDL_REPLICATION_LEVEL
is 1, stop the replication agent and use the ALTER ACTIVE STANDBY PAIR EXCLUDE TABLE
statement to exclude the table from the replication scheme. Then use the DROP TABLE
statement to drop the table.
See "Making DDL changes in an active standby pair" in the Oracle TimesTen In-Memory Database Replication Guide for more information.
A temporary table cannot be dropped by a connection if some other connection has some non-empty instance of the table.
CREATE TABLE vendorperf (ordernumber INTEGER, delivday TT_SMALLINT,
  delivmonth TT_SMALLINT, delivyear TT_SMALLINT, delivqty TT_SMALLINT,
  remarks VARCHAR2(60));
CREATE UNIQUE INDEX vendorperfindex ON vendorperf (ordernumber);
The following statement drops the table and index.
DROP TABLE vendorperf;
The DROP USER
statement removes a user from the database.
ADMIN
DROP USER user
Parameter | Description |
---|---|
user |
Name of the user that is being dropped from the database. |
Before you can drop a user:
The user must exist either internally or externally in the database.
You must drop objects that the user owns.
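For example, assuming user terry owns only a hypothetical table terry.orders, the owned object must be dropped before the user:
DROP TABLE terry.orders;
DROP USER terry;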
When replication is configured, this statement is replicated.
Drop user terry
from the database:
DROP USER terry;
User dropped.
CREATE USER
ALTER USER
GRANT
REVOKE
The FLUSH CACHE GROUP
statement flushes data from TimesTen cache tables to Oracle Database tables. This statement is available only for user managed cache groups. For a description of cache group types, see "User managed and system managed cache groups".
There are two variants to this operation: one that accepts a WHERE
clause, and one that accepts a WITH ID
clause.
FLUSH CACHE GROUP
is meant to be used when commit propagation (from TimesTen to Oracle Database) is turned off. Instead of propagating every transaction upon commit, many transactions can be committed before changes are propagated to Oracle Database. For each cache instance ID, if the cache instance exists in the Oracle database, the operation in the Oracle database consists of an update. If the cache instance does not exist in the Oracle database, TimesTen inserts it.
This is useful, for example, in a shopping cart application in which many changes may be made to the cart, which uses TimesTen as a high-speed cache, before the order is committed to the master Oracle database table.
Note:
Using a WITH ID
clause usually results in better system performance than using a WHERE
clause. Only inserts and updates are flushed. Inserts are propagated as inserts if the record does not exist in the Oracle database table, or as updates if the record already exists. It is not possible to flush a delete. That is, if a record is deleted on TimesTen, there is no way to "flush" that delete to the Oracle database table. Deletes must be propagated either manually or by turning commit propagation on. Attempts to flush deleted records are silently ignored; no error or warning is issued. Records from tables that are specified as READ ONLY
or PROPAGATE
cannot be flushed to the Oracle database tables.
No privilege is required for the cache group owner.
FLUSH
or FLUSH ANY CACHE GROUP
for another user's cache group.
FLUSH CACHE GROUP [Owner.]GroupName [WHERE ConditionalExpression];
or
FLUSH CACHE GROUP [Owner.]GroupName WITH ID (ColumnValueList)
Parameter | Description |
---|---|
[ Owner .] GroupName |
Name of the cache group to be flushed. |
WHERE ConditionalExpression |
Use the WHERE clause to specify a search condition to qualify the target rows of the cache operation. If you use more than one table in the WHERE clause and the tables have columns with the same names, fully qualify the table names. |
WITH ID ColumnValueList |
The WITH ID clause enables you to use primary key values to flush the cache instance. Specify ColumnValueList as either a list of literals or binding parameters to represent the primary key values. |
WHERE
clauses are generally used to apply the operation to a set of cache instances, rather than to a single cache instance or to all cache instances. The flush operation uses the WHERE
clause to determine which cache instances to send to the Oracle database.
Generally, you do not have to fully qualify the column names in the WHERE
clause of the FLUSH CACHE GROUP
statement. However, since TimesTen automatically generates queries that join multiple tables in the same cache group, a column must be fully qualified if there is more than one table in the cache group that contains columns with the same name. Without an owner name, all tables referenced by cache group WHERE
clauses are owned by the current login name executing the cache group operation.
When the WHERE
clause is omitted, the entire contents of the cache group is flushed to the Oracle database tables. When the WHERE
clause is included, it can reference only the root table.
Following the execution of a FLUSH CACHE GROUP
statement, the ODBC function SQLRowCount()
, the JDBC method getUpdateCount()
, and the OCI function OCIAttrGet()
with the OCI_ATTR_ROW_COUNT
argument return the number of cache instances that were flushed.
Use the WITH ID
clause to specify binding parameters.
Do not use the WITH ID
clause on AWT or SWT cache groups, user managed cache groups with the propagate attribute, or autorefreshed and propagated user managed cache groups unless the cache group is a dynamic cache group.
FLUSH CACHE GROUP marketbasket;
FLUSH CACHE GROUP marketbasket WITH ID(10);
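A WHERE clause can restrict the flush to a subset of cache instances. The following sketch is illustrative only and assumes a hypothetical root table ttuser.shoppingcart, with a cart_id column, in the marketbasket cache group:
FLUSH CACHE GROUP marketbasket
  WHERE (ttuser.shoppingcart.cart_id BETWEEN 1000 AND 1999);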
CREATE CACHE GROUP
The GRANT
statement assigns one or more privileges to a user.
ADMIN
to grant system privileges.
ADMIN
or the object owner to grant object privileges.
GRANT {SystemPrivilege [,...] | ALL [PRIVILEGES]} [...] TO {user | PUBLIC} [,...]
GRANT {{ObjectPrivilege [,...] | ALL [PRIVILEGES]} ON {[Owner.]object} [,...]} TO {user | PUBLIC} [,...]
The following parameters are for granting system privileges:
Parameter | Description |
---|---|
SystemPrivilege |
See "System privileges" for a list of acceptable values. |
ALL [PRIVILEGES] |
Assigns all system privileges to the user. |
user |
Name of the user to whom privileges are being granted. The user name must first have been introduced to the TimesTen database by a CREATE USER statement. |
PUBLIC |
Specifies that the privilege is granted to all users. |
The following parameters are for granting object privileges:
Parameter | Description |
---|---|
ObjectPrivilege |
See "Object privileges" for a list of acceptable values. |
ALL [PRIVILEGES] |
Assigns all object privileges to the user. |
[ Owner .] object |
object is the name of the object on which privileges are being granted. Owner is the owner of the object. If Owner is not specified, the user who is granting the privilege is the owner. |
user |
Name of the user to whom privileges are being granted. The user must exist in the database. |
PUBLIC |
Specifies that the privilege is granted to all users. |
One or more system privileges can be granted to a user by a user with ADMIN
privilege.
One or more object privileges can be granted to a user by the owner of the object.
One or more object privileges can be granted to a user on any object by a user with ADMIN
privilege.
To remove a privilege from a user, use the REVOKE
statement.
You cannot grant system privileges and object privileges in the same statement.
Only one object can be specified in an object privilege statement.
When replication is configured, this statement is replicated.
Grant the ADMIN
privilege to the user terry
:
GRANT admin TO terry;
Assuming the grantor has ADMIN
privilege, grant the SELECT
privilege to user terry
on the customers
table owned by user pat
:
GRANT SELECT ON pat.customers TO terry;
Grant an object privilege to user terry
:
GRANT SELECT ON emp_details_view TO terry;
CREATE USER
ALTER USER
DROP USER
REVOKE
The INSERT
statement adds rows to a table.
The following expressions can be used in the VALUES
clause of an INSERT
statement:
Sequence
NEXTVAL
and Sequence
CURRVAL
DEFAULT
No privilege is required for the table owner.
INSERT
for another user's table.
INSERT [hint] INTO [Owner.]TableName [(Column [,...])] VALUES (SingleRowValues) [RETURNING|RETURN Expression[,...] INTO DataItem[,...]]
The SingleRowValues
parameter has the syntax:
{NULL|{?|:DynamicParameter}|{Constant}| DEFAULT}[,...]
Parameter | Description |
---|---|
hint |
Specifies a statement level optimizer hint for the INSERT statement. For more information on hints, see "Statement level optimizer hints". |
Owner |
The owner of the table into which data is inserted. |
TableName |
Name of the table into which data is inserted. |
Column |
Each column in this list is assigned a value from SingleRowValues .
If you omit one or more of the table's columns from this list, then the value of the omitted column in the inserted row is the column default value as specified when the table was created or last altered. If any omitted column has a NOT NULL constraint and no default value, the INSERT statement fails. If you omit the list of columns completely, then you must specify values for all columns in the table. |
?
|
Placeholder for a dynamic parameter in a prepared SQL statement. The value of the dynamic parameter is supplied when the statement is executed. |
Constant |
A specific value. See "Constants". |
DEFAULT |
Specifies that the column should be updated with the default value. |
Expression |
Valid expression syntax. See Chapter 3, "Expressions". |
DataItem |
Host variable or PL/SQL variable that stores the retrieved Expression value. |
If you omit any of the table's columns from the column name list, the INSERT
statement places the default value in the omitted columns. If the table definition specifies NOT NULL
for any of the omitted columns and there is no default value, the INSERT
statement fails.
BINARY
and VARBINARY
data can be inserted in character or hexadecimal format:
Character format requires single quotes.
Hexadecimal format requires the prefix 0x
before the value.
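For illustration, the following sketch assumes a hypothetical table bintest with a VARBINARY(10) column and shows one insert in each format:
CREATE TABLE bintest (id TT_INTEGER, val VARBINARY(10));
-- Character format: the value is enclosed in single quotes
INSERT INTO bintest VALUES (1, '1A2B');
-- Hexadecimal format: the value is prefixed with 0x
INSERT INTO bintest VALUES (2, 0x1A2B);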
The INSERT
operation fails if it violates a foreign key constraint. See "CREATE TABLE" for a description of the foreign key constraint.
Restrictions on the RETURNING
clause:
Each Expression
must be a simple expression. Aggregate functions are not supported.
You cannot return a sequence number into an OUT
parameter.
ROWNUM
and subqueries cannot be used in the RETURNING
clause.
Parameters in the RETURNING
clause cannot be duplicated anywhere in the INSERT
statement.
In PL/SQL, you cannot use a RETURNING
clause with a WHERE CURRENT
operation.
A new single row is added to the purchasing.vendors
table.
INSERT INTO purchasing.vendors VALUES (9016, 'Secure Systems, Inc.', 'Jane Secret', '454-255-2087', '1111 Encryption Way', 'Hush', 'MD', '00007', 'discount rates are secret');
For dynamic parameters :pno
and :pname
, values are supplied at runtime.
INSERT INTO purchasing.parts (partnumber, partname) VALUES (:pno, :pname);
Return the annual salary
and job_id
of a new employee. Declare the variables sal12
and jobid
with the same data types as salary
and job_id
. Insert the row into employees
. Print the variables for verification.
Command> VARIABLE sal12 NUMBER(8,2);
Command> VARIABLE jobid VARCHAR2(10) INLINE NOT NULL;
Command> INSERT INTO employees(employee_id, last_name, email, hire_date,
> job_id, salary)
> VALUES (211,'Doe','JDOE',sysdate,'ST_CLERK',2400)
> RETURNING salary*12, job_id INTO :sal12,:jobid;
1 row inserted.
Command> PRINT sal12 jobid;
SAL12 : 28800
JOBID : ST_CLERK
CREATE TABLE
INSERT...SELECT
The INSERT...SELECT
statement inserts the results of a query into a table.
No privilege is required for the object owner.
INSERT
and SELECT
for another user's object.
INSERT INTO [Owner.]TableName [(ColumnName [,...])] InsertQuery
Parameter | Description |
---|---|
[ Owner .] TableName |
Table to which data is to be added. |
ColumnName |
Column for which values are supplied. If you omit any of the table's columns from the column name list, the INSERT...SELECT statement places the default value in the omitted columns. If the table definition specifies NOT NULL, without a default value, for any of the omitted columns, the INSERT...SELECT statement fails. You can omit the column name list if you provide values for all columns of the table in the same order the columns were specified in the CREATE TABLE statement. If too few values are provided, the remaining columns are assigned default values. |
InsertQuery |
Any supported SELECT query. See "SELECT". You can specify a statement level optimizer hint after the SELECT verb. For more information on statement level optimizer hints, see "Statement level optimizer hints". |
The column types of the result set must be compatible with the column types of the target table.
You can specify a sequence CURRVAL
or NEXTVAL
when inserting values. See "Incrementing SEQUENCE values with CURRVAL and NEXTVAL" for more details.
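As a minimal sketch, assuming a hypothetical sequence partseq and a hypothetical staging table newparts with a partname column, each inserted row can take its part number from the sequence:
INSERT INTO purchasing.parts (partnumber, partname)
  SELECT partseq.NEXTVAL, np.partname FROM newparts np;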
In the InsertQuery
, the ORDER BY
clause is allowed. You can use it to influence the order in which rows of the result set are inserted into the target table, but the insertion order is not guaranteed.
The INSERT
operation fails if there is an error in the InsertQuery
.
A RETURNING
clause cannot be used in an INSERT...SELECT
statement.
New rows are added to the purchasing.parts
table that describe which parts are delivered in 20 days or less.
INSERT INTO purchasing.parts SELECT partnumber, deliverydays FROM purchasing.supplyprice WHERE deliverydays < 20;
The LOAD CACHE GROUP
statement loads data from Oracle database tables into a TimesTen cache group. The load operation is local. It is not propagated across cache members.
No privilege is required for the cache group owner.
LOAD CACHE GROUP
or LOAD ANY CACHE GROUP
for another user's cache group.
LOAD CACHE GROUP [Owner.]GroupName [WHERE ConditionalExpression] COMMIT EVERY n ROWS [PARALLEL NumThreads]
or
LOAD CACHE GROUP [Owner.]GroupName WITH ID (ColumnValueList)
Parameter | Description |
---|---|
[ Owner .] GroupName |
Name assigned to the cache group. |
WHERE ConditionalExpression |
Use the WHERE clause to specify a search condition to qualify the target rows of the cache operation. If you use more than one table in the WHERE clause and the tables have columns with the same names, fully qualify the table names. |
COMMIT EVERY n ROWS |
Use the COMMIT EVERY n ROWS clause to indicate the frequency (based on the number of rows that are loaded into the cache group) at which a commit is issued during the load operation. This clause is required if you do not specify the WITH ID clause.
|
[ PARALLEL NumThreads ] |
Provides parallel loading for cache group tables. Specifies the number of loading threads to run concurrently. One thread performs the bulk fetch from the Oracle database and the other threads (NumThreads - 1 threads) perform the inserts into TimesTen. Each thread uses its own connection or transaction.
The minimum value for |
WITH ID ColumnValueList |
The WITH ID clause enables you to use primary key values to load the cache instance. Specify ColumnValueList as either a list of literals or binding parameters to represent the primary key values. |
LOAD CACHE GROUP
loads all new cache instances from the Oracle database that satisfy the cache group definition and are not yet present in the cache group.
Before issuing the LOAD CACHE GROUP
statement, ensure that the replication agent is running if the cache group is replicated or is an AWT cache group. Also make sure the cache agent is running.
LOAD CACHE GROUP
is executed in its own transaction, and must be the first operation in a transaction.
For an explicitly loaded cache group, LOAD CACHE GROUP
does not update cache instances that are already present in the TimesTen cache tables. Therefore, LOAD CACHE GROUP
loads only inserts on the Oracle database tables into the corresponding TimesTen cache tables.
For a dynamic cache group, LOAD CACHE GROUP
loads cache instances that have been inserted, updated and deleted on the Oracle database tables into the cache tables. For more information about explicitly loaded and dynamic cache groups, see Oracle In-Memory Database Cache User's Guide.
If an error occurs during the load, the load is rolled back. When cache instances are committed periodically, an error aborts the remainder of the load, and the load is rolled back to the last commit.
If the LOAD CACHE GROUP
statement fails when you specify COMMIT EVERY
n
ROWS
(where n
is greater than 0), the content of the target cache group could be in an inconsistent state since some loaded rows are already committed. Some cache instances may be partially loaded. Use the UNLOAD CACHE GROUP
statement to unload the cache group, then reload the cache group.
Generally, you do not have to fully qualify the column names in the WHERE
clause of the LOAD CACHE GROUP
statement. However, since TimesTen automatically generates queries that join multiple tables in the same cache group, a column must be fully qualified if there is more than one table in the cache group that contains columns with the same name.
When loading a read-only cache group:
The AUTOREFRESH
state must be paused.
The LOAD CACHE GROUP
statement cannot have a WHERE
clause (except on a dynamic cache group).
The cache group must be empty.
If the automatic refresh state of a cache group (explicitly loaded or dynamic) is PAUSED
, the state is changed to ON
after a LOAD CACHE GROUP
statement that was issued on the cache group completes. If the automatic refresh state of a dynamic cache group is PAUSED
and the cache tables are populated, the state remains PAUSED
after a LOAD CACHE GROUP
statement that was issued on the cache group completes.
Following the execution of a LOAD CACHE GROUP
statement, the ODBC function SQLRowCount()
, the JDBC method getUpdateCount()
, and the OCI function OCIAttrGet()
with the OCI_ATTR_ROW_COUNT
argument return the number of cache instances that were loaded.
Use the WITH ID
clause as follows:
In place of the WHERE
clause for faster loading of the cache instance
To specify binding parameters
To roll back the load transaction upon failure
Do not reference child tables in the WHERE
clause.
Do not specify the PARALLEL
clause in the following circumstances:
With the WITH ID
clause
With the COMMIT EVERY 0 ROWS
clause
When database level locking is enabled (connection attribute LockLevel
is set to 1)
Do not use the WITH ID
clause when loading these types of cache groups:
Explicitly loaded read-only cache group
Explicitly loaded user managed cache group with the autorefresh attribute
User managed cache group with the AUTOREFRESH
and PROPAGATE
attributes
Do not use the WITH ID
clause with the COMMIT EVERY
n
ROWS
clause.
The WITH ID
clause cannot be used to acquire a cache instance from another cache grid member.
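For a dynamic cache group, a WITH ID load might look like the following sketch, which assumes that new_customers is a dynamic cache group whose root table has the single-column primary key cust_num; a binding parameter can be supplied instead of the literal:
LOAD CACHE GROUP new_customers WITH ID (42);
LOAD CACHE GROUP new_customers WITH ID (?);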
CREATE CACHE GROUP recreation.cache
  FROM recreation.clubs (
    clubname  CHAR(15) NOT NULL,
    clubphone SMALLINT,
    activity  CHAR(18),
    PRIMARY KEY(clubname))
  WHERE (recreation.clubs.activity IS NOT NULL);
LOAD CACHE GROUP recreation.cache COMMIT EVERY 30 ROWS;
Use the HR
schema to illustrate the use of the PARALLEL
clause with the LOAD CACHE GROUP
statement. The COMMIT EVERY
n
ROWS
clause is required. Issue the CACHEGROUPS
command. You see cache group cg2
is defined and the autorefresh state is paused. Unload cache group cg2
, then specify the LOAD CACHE GROUP
statement with the PARALLEL
clause to provide parallel loading. You see 25 cache instances loaded.
Command> CACHEGROUPS;
Cache Group SAMPLEUSER.CG2:
  Cache Group Type: Read Only
  Autorefresh: Yes
  Autorefresh Mode: Incremental
  Autorefresh State: Paused
  Autorefresh Interval: 1.5 Minutes
  Root Table: SAMPLEUSER.COUNTRIES
  Table Type: Read Only
  Child Table: SAMPLEUSER.LOCATIONS
  Table Type: Read Only
  Child Table: SAMPLEUSER.DEPARTMENTS
  Table Type: Read Only
1 cache group found.
Command> UNLOAD CACHE GROUP cg2;
25 cache instances affected.
Command> COMMIT;
Command> LOAD CACHE GROUP cg2 COMMIT EVERY 10 ROWS PARALLEL 2;
25 cache instances affected.
Command> COMMIT;
The following example loads only the cache instances for customers whose customer number is greater than or equal to 5000 into the TimesTen cache tables in the new_customers
cache group from the corresponding Oracle database tables:
LOAD CACHE GROUP new_customers WHERE (oratt.customer.cust_num >= 5000) COMMIT EVERY 256 ROWS;
REFRESH CACHE GROUP
UNLOAD CACHE GROUP
The MERGE
statement enables you to select rows from one or more sources for update or insertion into a target table. You can specify conditions that are used to evaluate which rows are updated or inserted into the target table.
Use this statement to combine multiple INSERT
and UPDATE
statements.
MERGE
is a deterministic statement: You cannot update the same row of the target table multiple times in the same MERGE
statement.
No privilege is required for the owner of the target table and the source table.
INSERT
or UPDATE
on a target table owned by another user and SELECT
on a source table owned by another user.
MERGE [hint] INTO [Owner.]TargetTableName [Alias]
  USING {[Owner.]SourceTableName | (Subquery)} [Alias]
  ON (Condition)
  {MergeUpdateClause MergeInsertClause |
   MergeInsertClause MergeUpdateClause |
   MergeUpdateClause |
   MergeInsertClause}
The syntax for MergeUpdateClause
is as follows:
WHEN MATCHED THEN UPDATE SET SetClause [WHERE Condition1]
The syntax for MergeInsertClause
is as follows:
WHEN NOT MATCHED THEN INSERT [Columns [,...]] VALUES ( {{Expression | DEFAULT|NULL} [,...] }) [WHERE Condition2]
Parameter | Description |
---|---|
hint |
Specifies a statement level optimizer hint for the MERGE statement. For more information on hints, see "Statement level optimizer hints". |
[ Owner .] TargetTableName |
Name of the target table. This is the table in which rows are either updated or inserted. |
[ Alias ] |
You can optionally specify an alias name for the target or source table. |
USING {[ Owner .] SourceTableName | ( Subquery )} [ Alias ] |
The USING clause indicates the table name or the subquery that is used for the source of the data. Use a subquery to use joins or aggregates. Optionally, you can specify an alias for the table name or the subquery. |
ON ( Condition ) |
Specify the condition used to evaluate each row of the target table to determine if the row should be considered for either a merge insert or a merge update. If the condition is true when evaluated, then the MergeUpdateClause is considered for the target row using the matching row from the SourceTableName . An error is generated if more than one row in the source table matches the same row in the target table. If the condition is not true when evaluated, then the MergeInsertClause is considered for that row. |
SET SetClause |
Clause used with the UPDATE statement. For information on the UPDATE statement, see "UPDATE". |
[ WHERE Condition1 ] |
For each row that matches the ON ( Condition ) , Condition1 is evaluated. If the condition is true when evaluated, the row is updated. You can refer to either the target table or the source table in this clause. You cannot use a subquery. The clause is optional. |
INSERT [ Columns [,...]]VALUES ({{ Expression |DEFAULT|NULL} [,...]}) |
Columns to insert into the target table. For more information on the INSERT statement, see "INSERT". |
[WHERE Condition2 ] |
If specified, Condition2 is evaluated. If the condition is true when evaluated, the row is inserted into the target table. The condition can refer to the source table only. You cannot use a subquery. |
You can specify the MergeUpdateClause
and MergeInsertClause
together or separately. If you specify both, they can be in either order.
If DUAL
is the only table specified in the USING
clause and it is not referenced elsewhere in the MERGE
statement, specify DUAL
as a simple table rather than using it in a subquery. In this simple case, to help performance, specify a key condition on a unique index of the target table in the ON
clause.
Restrictions on the MergeUpdateClause
:
You cannot update a column that is referenced in the ON
condition clause.
You cannot update source table columns.
Restrictions on the MergeInsertClause
:
You cannot insert values of target table columns.
Other restrictions:
In this example, dual
is specified as a simple table. There is a key condition on the UNIQUE
index of the target table specified in the ON
clause. The DuplicateBindMode
attribute is set to 1 in this example. (The default is 0.)
Command> CREATE TABLE mergedualex (col1 TT_INTEGER NOT NULL,
> col2 TT_INTEGER, PRIMARY KEY (col1));
Command> MERGE INTO mergedualex USING dual ON (col1 = :v1)
> WHEN MATCHED THEN UPDATE SET col2 = col2 + 1
> WHEN NOT MATCHED THEN INSERT VALUES (:v1, 1);
Type '?' for help on entering parameter values.
Type '*' to end prompting and abort the command.
Type '-' to leave the parameter unbound.
Type '/;' to leave the remaining parameters unbound and execute the command.
Enter Parameter 1 'V1' (TT_INTEGER) > 10
1 row merged.
Command> SELECT * FROM mergedualex;
< 10, 1 >
1 row found.
In this example, a table called contacts
is created with columns employee_id
and manager_id
. One row is inserted into contacts
with values 101 and NULL
for employee_id
and manager_id
, respectively. The MERGE
statement is used to insert rows into contacts
using the data in the employees
table. A SELECT FIRST 3
rows is used to illustrate that in the case where employee_id
is equal to 101, manager_id
is updated to 100. The remaining 106 rows from the employees
table are inserted into contacts
:
Command> CREATE TABLE contacts (employee_id NUMBER (6) NOT NULL PRIMARY KEY,
> manager_id NUMBER (6));
Command> SELECT employee_id, manager_id FROM employees WHERE employee_id =101;
< 101, 100 >
1 row found.
Command> INSERT INTO contacts VALUES (101,null);
1 row inserted.
Command> SELECT COUNT (*) FROM employees;
< 107 >
1 row found.
Command> MERGE INTO contacts c
> USING employees e
> ON (c.employee_id = e.employee_id)
> WHEN MATCHED THEN
> UPDATE SET c.manager_id = e.manager_id
> WHEN NOT MATCHED THEN
> INSERT (employee_id, manager_id)
> VALUES (e.employee_id, e.manager_id);
107 rows merged.
Command> SELECT COUNT (*) FROM contacts;
< 107 >
1 row found.
Command> SELECT FIRST 3 employee_id,manager_id FROM employees;
< 100, <NULL> >
< 101, 100 >
< 102, 100 >
3 rows found.
Command> SELECT FIRST 3 employee_id, manager_id FROM contacts;
< 100, <NULL> >
< 101, 100 >
< 102, 100 >
3 rows found.
The REFRESH CACHE GROUP
statement replaces data in the TimesTen cache tables with the most current committed data from the Oracle database cached tables. The refresh operation is local and is not propagated across grid members.
CREATE SESSION
on the Oracle Database schema and SELECT
on the Oracle Database tables.
No privilege for the cache group is required for the cache group owner.
REFRESH CACHE GROUP
or REFRESH ANY CACHE GROUP
for another user's cache group.
REFRESH CACHE GROUP [Owner.]GroupName [WHERE ConditionalExpression] COMMIT EVERY n ROWS [PARALLEL NumThreads]
or
REFRESH CACHE GROUP [Owner.]GroupName WITH ID (ColumnValueList)
Parameter | Description |
---|---|
[ Owner .] GroupName |
Name assigned to the cache group. |
WHERE ConditionalExpression |
Use the WHERE clause to specify a search condition to qualify the target rows of the cache operation. If you use more than one table in the WHERE clause and the tables have columns with the same names, fully qualify the table names. |
COMMIT EVERY n ROWS |
Use the COMMIT EVERY n ROWS clause to indicate the frequency (based on the number of rows that are refreshed in the cache group) at which a commit is issued during the refresh operation. This clause is required if you do not specify the WITH ID clause.
|
[PARALLEL NumThreads ] |
Provides parallel loading for cache group tables. Specifies the number of loading threads to run concurrently. One thread performs the bulk fetch from the Oracle database and the other threads (NumThreads - 1 threads) perform the inserts into TimesTen. Each thread uses its own connection or transaction.
The minimum value for |
WITH ID ColumnValueList |
The WITH ID clause enables you to use primary key values to refresh the cache instance. Specify ColumnValueList as either a list of literals or binding parameters to represent the primary key values. |
A REFRESH CACHE GROUP
statement must be executed in its own transaction.
Before issuing the REFRESH CACHE GROUP
statement, ensure that the replication agent is running if the cache group is replicated or is an AWT cache group. Also make sure the cache agent is running.
The REFRESH
CACHE
GROUP
statement replaces data in the TimesTen cached tables with the most current committed data from the cached Oracle database tables, including data that already exists in the TimesTen cached tables.
For an explicitly loaded cache group, a refresh operation is equivalent to issuing an UNLOAD CACHE GROUP statement followed by a LOAD CACHE GROUP statement. Operations on all rows in the Oracle database tables, including inserts, updates, and deletes, are applied to the cache tables.
For dynamic cache groups, a refresh operation refreshes only rows that are updated or deleted on the Oracle database tables into the cache tables. For more information on explicitly loaded and dynamic cache groups, see Oracle In-Memory Database Cache User's Guide.
When refreshing a read-only cache group:
The AUTOREFRESH
state must be paused.
If the cache group is a read-only dynamic cache group, do not use the PARALLEL
clause.
If the automatic refresh state of a cache group (dynamic or explicitly loaded) is PAUSED
, the state is changed to ON
after an unconditional REFRESH CACHE GROUP
statement issued on the cache group completes.
If the automatic refresh state of a dynamic cache group is PAUSED
, the state remains PAUSED
after a REFRESH CACHE GROUP...WITH ID
statement completes.
Generally, you do not have to fully qualify the column names in the WHERE
clause of the REFRESH CACHE GROUP
statement. However, since TimesTen automatically generates queries that join multiple tables in the same cache group, a column must be fully qualified if there is more than one table in the cache group that contains columns with the same name.
If the REFRESH CACHE GROUP
statement fails when you specify COMMIT EVERY
n
ROWS
(where n
is >= 0), the content of the target cache group could be in an inconsistent state since some loaded rows are already committed. Some cache instances may be partially loaded. Use the UNLOAD CACHE GROUP
statement to unload the cache group, then use the LOAD CACHE GROUP
statement to reload the cache group.
Following the execution of a REFRESH CACHE GROUP
statement, the ODBC function SQLRowCount()
, the JDBC method getUpdateCount()
, and the OCI function OCIAttrGet()
with the OCI_ATTR_ROW_COUNT
argument return the number of cache instances that were refreshed.
Use the WITH ID
clause:
In place of the WHERE
clause for faster refreshing of the cache instance
To specify binding parameters
To roll back the refresh transaction upon failure
Do not specify the PARALLEL
clause:
With the WITH ID
clause
With the COMMIT
EVERY
n
ROWS
clause
When database level locking is enabled (connection attribute LockLevel
is set to 1)
For read-only dynamic cache groups
Do not use the WITH ID
clause when refreshing these types of cache groups:
Explicitly loaded read-only cache groups
Explicitly loaded user managed cache groups with the autorefresh attribute
User managed cache groups with the autorefresh and propagate attributes
Do not use the WITH ID
clause with the COMMIT EVERY
n
ROWS
clause.
Do not use the WHERE
clause with dynamic cache groups.
REFRESH CACHE GROUP recreation.cache COMMIT EVERY 30 ROWS;
Is equivalent to:
UNLOAD CACHE GROUP recreation.cache;
LOAD CACHE GROUP recreation.cache COMMIT EVERY 30 ROWS;
Use the HR
schema to illustrate the use of the PARALLEL
clause with the REFRESH CACHE GROUP
statement. The COMMIT EVERY
n
ROWS
is required. Issue the CACHEGROUPS
command. You see cache group cg2
is defined and the autorefresh state is paused. Specify the REFRESH CACHE GROUP
statement with the PARALLEL
clause to provide parallel loading. You see 25 cache instances refreshed.
Command> CACHEGROUPS;
Cache Group SAMPLEUSER.CG2:
  Cache Group Type: Read Only
  Autorefresh: Yes
  Autorefresh Mode: Incremental
  Autorefresh State: Paused
  Autorefresh Interval: 1.5 Minutes
  Root Table: SAMPLEUSER.COUNTRIES
  Table Type: Read Only
  Child Table: SAMPLEUSER.LOCATIONS
  Table Type: Read Only
  Child Table: SAMPLEUSER.DEPARTMENTS
  Table Type: Read Only
1 cache group found.
Command> REFRESH CACHE GROUP cg2 COMMIT EVERY 20 ROWS PARALLEL 2;
25 cache instances affected.
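A WITH ID refresh of a single cache instance might look like the following sketch, which assumes a hypothetical dynamic AWT cache group ttuser.westcustomers whose root table has a single-column primary key:
REFRESH CACHE GROUP ttuser.westcustomers WITH ID (1003);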
ALTER CACHE GROUP
CREATE CACHE GROUP
DROP CACHE GROUP
FLUSH CACHE GROUP
LOAD CACHE GROUP
UNLOAD CACHE GROUP
The REFRESH MATERIALIZED VIEW
statement refreshes an asynchronous materialized view manually.
Required privilege on the materialized view log tables:
No privilege is required for the owner of the materialized view log tables.
SELECT ANY TABLE
if not the owner of materialized view log tables.
Required privilege on the materialized view:
No privilege is required for the owner of the materialized view.
SELECT ANY TABLE
if not the owner of materialized view.
REFRESH MATERIALIZED VIEW ViewName
Parameter | Description |
---|---|
ViewName |
Name of the asynchronous materialized view |
This statement refreshes the specified asynchronous materialized view. It is executed in a separate thread as a separate transaction and committed. The user transaction is not affected, but the user thread waits for the refresh operation to be completed before returning to the user. If you have not specified a refresh interval for an asynchronous materialized view, using this statement is the only way to refresh the view. If you have specified a refresh interval, you can still use this statement to refresh the view manually.
Since the refresh operation is always performed in a separate transaction, the refresh operation does not wait for any uncommitted user transactions to commit. Only the committed rows are considered for the refresh operation. This is true for the manual refresh statement as well as the automatic refresh that takes place at regular intervals.
If the CREATE MATERIALIZED VIEW
statement for the view specified a FAST
refresh, then the REFRESH MATERIALIZED VIEW
statement uses the incremental refresh method. Otherwise this statement uses the full refresh method.
REFRESH MATERIALIZED VIEW bookorders;
CREATE MATERIALIZED VIEW
DROP [MATERIALIZED] VIEW
The REVOKE
statement removes one or more privileges from a user.
ADMIN
to revoke system privileges.
ADMIN
or object owner to revoke object privileges.
REVOKE {SystemPrivilege [,...] | ALL [PRIVILEGES]} FROM {User | PUBLIC} [,...]
REVOKE {{ObjectPrivilege [,...] | ALL [PRIVILEGES]} ON {[Owner.]Object} [,...]} FROM {User | PUBLIC} [,...]
The following parameters are for revoking system privileges:
Parameter | Description |
---|---|
SystemPrivilege |
See "System privileges" for a list of acceptable values. |
ALL [PRIVILEGES] |
Revokes all system privileges from the user. |
User |
Name of the user from whom privileges are being revoked. The user name must first have been introduced to the TimesTen database by a CREATE USER statement. |
PUBLIC |
Specifies that the privilege is revoked for all users. |
The following parameters are for revoking object privileges:
Parameter | Description |
---|---|
ObjectPrivilege |
See "Object privileges" for a list of acceptable values. |
ALL [PRIVILEGES] |
Revokes all object privileges from the user. |
User |
Name of the user from whom privileges are to be revoked. The user name must first have been introduced to the TimesTen database through a CREATE USER statement. |
[ Owner .] Object |
Object is the name of the object on which privileges are being revoked. Owner is the owner of the object. If Owner is not specified, the user who is revoking the privilege is assumed to be the owner. |
PUBLIC |
Specifies that the privilege is revoked for all users. |
Privileges on objects cannot be revoked from the owner of the objects.
Any user who can grant a privilege can revoke the privilege even if they were not the user who originally granted the privilege.
Privileges must be revoked at the same level they were granted. You cannot revoke an object privilege from a user who has the associated system privilege. For example, if you grant SELECT ANY TABLE
to a user and then try to revoke SELECT ON BOB.TABLE1
, the revoke fails unless you have specifically granted SELECT ON BOB.TABLE1
in addition to SELECT ANY TABLE
.
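As an illustrative sketch, assuming a user terry and a table bob.table1 as in the example above, each privilege is revoked at the same level at which it was granted:
GRANT SELECT ANY TABLE TO terry;
GRANT SELECT ON bob.table1 TO terry;
REVOKE SELECT ON bob.table1 FROM terry;
REVOKE SELECT ANY TABLE FROM terry;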
If a user has been granted all system privileges, you can revoke a specific privilege. For example, you can revoke ALTER ANY TABLE
from a user who has been granted all system privileges.
If a user has been granted all object privileges, you can revoke a specific privilege on a specific object from the user. For example, you can revoke the DELETE
privilege on table CUSTOMERS
from user TERRY
even if TERRY
has previously been granted all object privileges.
You can revoke all privileges from a user even if the user has not previously been granted all privileges.
You cannot revoke a specific privilege from a user who has not been granted the privilege.
You cannot revoke privileges on objects owned by a user.
You cannot revoke system privileges and object privileges in the same statement.
You can specify only one object in an object privilege statement.
Revoking the SELECT
privilege on a detail table or a system privilege that includes the SELECT
privilege from user2
on a detail table owned by user1
causes associated materialized views owned by user2
to be marked invalid. See "Invalid materialized views".
When replication is configured, this statement is replicated.
Revoke the ADMIN
and DDL
privileges from the user terry
:
REVOKE admin, ddl FROM terry;
Assuming the revoker has ADMIN
privilege, revoke the UPDATE
privilege from terry
on the customers
table owned by pat
:
REVOKE update ON pat.customers FROM terry;
ALTER USER
CREATE USER
DROP USER
GRANT
Use the ROLLBACK
statement to undo work done in the current transaction.
None
ROLLBACK [WORK]
The ROLLBACK
statement enables the following optional keyword:
Parameter | Description |
---|---|
[WORK] |
Optional clause supported for compliance with the SQL standard. ROLLBACK and ROLLBACK WORK are equivalent. |
When the PassThrough
connection attribute is specified with a value greater than zero, the Oracle database transaction will also be rolled back.
A rollback closes all open cursors.
Insert a row into the regions
table of the HR
schema and then roll back the transaction. First set AUTOCOMMIT
to 0:
Command> SET AUTOCOMMIT 0;
Command> INSERT INTO regions VALUES (5,'Australia');
1 row inserted.
Command> SELECT * FROM regions;
< 1, Europe >
< 2, Americas >
< 3, Asia >
< 4, Middle East and Africa >
< 5, Australia >
5 rows found.
Command> ROLLBACK;
Command> SELECT * FROM regions;
< 1, Europe >
< 2, Americas >
< 3, Asia >
< 4, Middle East and Africa >
4 rows found.
COMMIT
The SELECT
statement retrieves data from one or more tables. The retrieved data is presented in the form of a table that is called the result table, result set, or query result.
No privilege is required for the object owner.
SELECT
for another user's object.
SELECT...FOR UPDATE
also requires UPDATE
privilege for another user's object.
The general syntax for a SELECT
statement is the following:
[WithClause]
SELECT [hint] [FIRST NumRows | ROWS m TO n] [ALL | DISTINCT] SelectList
  FROM TableSpec [,...]
  [WHERE SearchCondition]
  [GROUP BY GroupByClause [,...] [HAVING SearchCondition]]
  [ORDER BY OrderByClause [,...]]
  [FOR UPDATE [OF [[Owner.]TableName.]ColumnName [,...]] [NOWAIT | WAIT Seconds]]
The syntax for a SELECT
statement that contains the set operators UNION
, UNION ALL
, MINUS
, or INTERSECT
is as follows:
SELECT [hint] [ROWS m TO n] [ALL] SelectList
  FROM TableSpec [,...]
  [WHERE SearchCondition]
  [GROUP BY GroupByClause [,...] [HAVING SearchCondition] [,...]]
{UNION [ALL] | MINUS | INTERSECT}
SELECT [ROWS m TO n] [ALL] SelectList
  FROM TableSpec [,...]
  [WHERE SearchCondition]
  [GROUP BY GroupByClause [,...] [HAVING SearchCondition [,...]]]
[ORDER BY OrderByClause [,...]]
The syntax for OrderByClause
is as follows:
{ColumnID|ColumnAlias|Expression} [ASC|DESC] [NULLS { FIRST|LAST }]
Parameter | Description |
---|---|
[ WithClause ] |
The WITH clause, also known as subquery factoring, enables you to assign a name to a subquery block, which can subsequently be referenced multiple times within the top-level SELECT statement. The syntax of the WITH clause is presented under "WithClause". |
hint |
Specifies a statement level optimizer hint for the SELECT statement. For more information on hints, see "Statement level optimizer hints". |
FIRST NumRows |
Specifies the number of rows to retrieve. NumRows must be either a positive INTEGER value or a dynamic parameter placeholder. The syntax for a dynamic parameter placeholder is either ? or :DynamicParameter . The value of the dynamic parameter is supplied when the statement is executed. |
ROWS m TO n |
Specifies the range of rows to retrieve where m is the first row to be selected and n is the last row to be selected. Row counting starts at row 1. The query SELECT ROWS 1 TO n returns the same rows as SELECT FIRST NumRows assuming the queries are ordered and n and NumRows have the same value.
Use either a positive |
ALL |
Prevents elimination of duplicate rows from the query result. If neither ALL nor DISTINCT is specified, ALL is the default. |
DISTINCT |
Ensures that each row in the query result is unique. All NULL values are considered equal for this comparison. Duplicate rows are not evaluated.
You cannot use |
SelectList |
Specifies how the columns of the query result are to be derived. The syntax of select list is presented under "SelectList". |
FROM TableSpec |
Identifies the tables referenced in the SELECT statement. The maximum number of tables per query is 24.
|
WHERE SearchCondition |
The WHERE clause determines the set of rows to be retrieved. Normally, rows for which SearchCondition is FALSE or NULL are excluded from processing, but SearchCondition can be used to specify an outer join in which rows from an outer table that do not have SearchCondition evaluated to TRUE with respect to any rows from the associated inner table are also returned, with projected expressions referencing the inner table set to NULL .
The unary (+) operator may follow some column and See Chapter 5, "Search Conditions" for more information on search conditions. |
GROUP BY GroupByClause [,...] |
The GROUP BY clause identifies one or more expressions to be used for grouping when aggregate functions are specified in the select list and when you want to apply the function to groups of rows. The syntax and description for the GROUP BY clause is described in "GROUP BY clause". |
HAVING SearchCondition |
The HAVING clause can be used in a SELECT statement to filter groups of an aggregate result. The existence of a HAVING clause in a SELECT statement turns the query into an aggregate query. All columns referenced outside the sources of aggregate functions in any clause except the WHERE clause must be included in the GROUP BY clause.
Subqueries can be specified in the |
(+) |
A simple join (also called an inner join) returns a row for each pair of rows from the joined tables that satisfy the join condition specified in SearchCondition . Outer joins are an extension of this operator in which all rows of the outer table are returned, whether or not matching rows from the joined inner table are found. In the case no matching rows are found, any projected expressions referencing the inner table are given the value NULL . |
ORDER BY OrderByClause [,...] |
Sorts the query result rows in order by specified columns or expressions. Specify the sort key columns in order from major sort key to minor sort key.
The |
ColumnID |
Must correspond to a column in the select list. You can identify a column to be sorted by specifying its name or its ordinal number. The first column in the select list is column number 1. It is better to use a column number when referring to columns in the select list if they are not simple columns. Some examples are aggregate functions, arithmetic expressions, and constants.
A
|
ColumnAlias |
Used in an ORDER BY clause, the column alias must correspond to a column in the select list. The same alias can identify multiple columns.
|
ASC|DESC |
For each column designated in the ORDER BY clause, you can specify whether the sort order is to be ascending or descending. If neither ASC (ascending) nor DESC (descending) is specified, ascending order is used. All character data types are sorted according to the current value of the NLS_SORT session parameter. |
NULLS { FIRST | LAST } |
Valid with ORDER BY clause and is optional. If you specify ASC or DESC , NULLS FIRST or NULLS LAST must follow ASC or DESC .
Specify If you specify the |
FOR UPDATE
|
FOR UPDATE
|
SelectQuery1
|
Specifies that the results of SelectQuery1 and SelectQuery2 are to be combined, where SelectQuery1 and SelectQuery2 are general SELECT statements with some restrictions.
The The The The data type of corresponding selected entries in both The length of a column in the result is the longer length of correspondent selected values for the column. The column names of the final result are the column names of the leftmost select. You can combine multiple queries using the set operators One or both operands of a set operator can be a set operator. Multiple or nested set operators are evaluated from left to right. The set operators can be mixed in the same query. Restrictions on the
|
When you use a correlation name, the correlation name must conform to the syntax rules for a basic name. (See "Basic names".) All correlation names within one SELECT
statement must be unique. Correlation names are useful when you join a table to itself. Define multiple correlation names for the table in the FROM
clause and use the correlation names in the select list and the WHERE
clause to qualify columns from that table. See "TableSpec" for more information about correlation names.
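For example, the following query joins the employees table to itself using two correlation names to pair each employee with that employee's manager (the column names follow the HR schema used elsewhere in this chapter):
SELECT e.last_name, m.last_name
  FROM employees e, employees m
  WHERE e.manager_id = m.employee_id;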
SELECT...FOR UPDATE
is supported in a SELECT
statement that specifies a subquery, but it can be specified only in the outermost query.
If your query specifies either FIRST
NumRows
or ROWS
m
TO
n
, ROWNUM
may not be used to restrict the number of rows returned.
FIRST
NumRows
and ROWS
m
TO
n
cannot be used together in the same SELECT
statement.
Use the SELECT...INTO
statement in PL/SQL. If you use the SELECT...INTO
statement outside of PL/SQL, TimesTen accepts, but silently ignores, the syntax.
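As a minimal PL/SQL sketch (the query must return exactly one row), an anonymous block can select a value directly into a variable:
DECLARE
  v_max_salary employees.salary%TYPE;
BEGIN
  SELECT MAX(salary) INTO v_max_salary FROM employees;
END;
/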
This example shows the use of a column alias (max_salary
) in the SELECT
statement:
SELECT MAX(salary) AS max_salary FROM employees
  WHERE employees.hire_date > '2000-01-01 00:00:00';
< 10500 >
1 row found.
This example uses two tables, orders
and lineitems
.
The orders
table and lineitems
table are created as follows:
CREATE TABLE orders(orderno INTEGER, orderdate DATE, customer CHAR(20));
CREATE TABLE lineitems(orderno INTEGER, lineno INTEGER, qty INTEGER, unitprice DECIMAL(10,2));
Thus for each order, there is one record in the orders
table and a record for each line of the order in lineitems
.
To find the total value of all orders entered since the beginning of the year, use the HAVING
clause to select only those orders that were entered on or after January 1, 2000:
SELECT o.orderno, customer, orderdate, SUM(qty * unitprice)
  FROM orders o, lineitems l
  WHERE o.orderno=l.orderno
  GROUP BY o.orderno, customer, orderdate
  HAVING orderdate >= DATE '2000-01-01';
Consider this query:
SELECT * FROM tablea, tableb WHERE tablea.column1 = tableb.column1 AND tableb.column2 > 5 FOR UPDATE;
The query locks all rows in tablea
where:
The value of tablea
.column1
equals at least one tableb
.column1
value where tableb
.column2
is greater than 5.
The query also locks all rows in tableb
where:
The value of tableb
.column2
is greater than 5.
The value of tableb
.column1
equals at least one tablea
.column1
value.
If no WHERE
clause is specified, all rows in both tables are locked.
This example demonstrates the (+) join operator:
SELECT * FROM t1, t2 WHERE t1.x = t2.x(+);
The following query returns an error because an outer join condition cannot be connected by OR
.
SELECT * FROM t1, t2, t3 WHERE t1.x = t2.x(+) OR t3.y = 5;
The following query is valid:
SELECT * FROM t1, t2, t3 WHERE t1.x = t2.x(+) AND (t3.y = 4 OR t3.y = 5);
A condition cannot use the IN
operator to compare a column marked with (+). For example, the following query returns an error.
SELECT * FROM t1, t2, t3 WHERE t1.x = t2.x(+) AND t2.y(+) IN (4,5);
The following query is valid:
SELECT * FROM t1, t2, t3 WHERE t1.x = t2.x(+) AND t1.y IN (4,5);
The following query results in an inner join. The condition without the (+) operator is treated as an inner join condition.
SELECT * FROM t1, t2 WHERE t1.x = t2.x(+) AND t1.y = t2.y;
In the following query, the WHERE
clause contains a condition that compares an inner table column of an outer join with a constant. The (+) operator is not specified and hence the condition is treated as an inner join condition.
SELECT * FROM t1, t2 WHERE t1.x = t2.x(+) AND t2.y = 3;
For more join examples, see "JoinedTable".
The following example returns the current sequence value in the student
table.
SELECT SEQ.CURRVAL FROM student;
The following query produces a derived table because it contains a SELECT
statement in the FROM
clause.
SELECT * FROM t1, (SELECT MAX(x2) maxx2 FROM t2) tab2 WHERE t1.x1 = tab2.maxx2;
The following query joins the results of two SELECT
statements.
SELECT * FROM t1 WHERE x1 IN (SELECT x2 FROM t2) UNION SELECT * FROM t1 WHERE x1 IN (SELECT x3 FROM t3);
In the following, select all orders that have the same price as the highest price in their category.
SELECT * FROM orders WHERE price = (SELECT MAX(price) FROM stock WHERE stock.cat=orders.cat);
The next example illustrates the use of the INTERSECT
set operator. There is a department_id
value in the employees
table that is NULL
. In the departments
table, the department_id
is defined as a NOT NULL
primary key. The rows returned from using the INTERSECT
set operator do not include the row in the departments
table whose department_id
value is NULL
.
Command> SELECT department_id FROM employees INTERSECT SELECT department_id
> FROM departments;
< 10 >
< 20 >
< 30 >
< 40 >
< 50 >
< 60 >
< 70 >
< 80 >
< 90 >
< 100 >
< 110 >
11 rows found.
Command> SELECT DISTINCT department_id FROM employees;
< 10 >
< 20 >
< 30 >
< 40 >
< 50 >
< 60 >
< 70 >
< 80 >
< 90 >
< 100 >
< 110 >
< <NULL> >
12 rows found.
The next example illustrates the use of the MINUS
set operator by combining rows returned by the first query but not the second. The row containing the NULL
department_id
value in the employees
table is the only row returned.
Command> SELECT department_id FROM employees
> MINUS SELECT department_id FROM departments;
< <NULL> >
1 row found.
The following example illustrates the use of the SUBSTR
expression in a GROUP BY
clause and the use of a subquery in a HAVING
clause. The first 10 rows are returned.
Command> SELECT ROWS 1 TO 10 SUBSTR (job_id, 4,10), department_id, manager_id,
> SUM (salary) FROM employees
> GROUP BY SUBSTR (job_id,4,10),department_id, manager_id
> HAVING (department_id, manager_id) IN
> (SELECT department_id, manager_id FROM employees x
> WHERE x.department_id = employees.department_id)
> ORDER BY SUBSTR (job_id, 4,10),department_id,manager_id;
< ACCOUNT, 100, 108, 39600 >
< ACCOUNT, 110, 205, 8300 >
< ASST, 10, 101, 4400 >
< CLERK, 30, 114, 13900 >
< CLERK, 50, 120, 22100 >
< CLERK, 50, 121, 25400 >
< CLERK, 50, 122, 23600 >
< CLERK, 50, 123, 25900 >
< CLERK, 50, 124, 23000 >
< MAN, 20, 100, 13000 >
10 rows found.
The following example locks the employees
table for update and waits 10 seconds for the lock to be available. An error is returned if the lock is not acquired in 10 seconds. The first five rows are selected.
Command> SELECT FIRST 5 last_name FROM employees FOR UPDATE WAIT 10; < King > < Kochhar > < De Haan > < Hunold > < Ernst > 5 rows found.
The next example locks the departments
table for update. If the selected rows are locked by another process, an error is returned if the lock is not available. This is because NOWAIT
is specified.
Command> SELECT FIRST 5 last_name e FROM employees e, departments d > WHERE e.department_id = d.department_id > FOR UPDATE OF d.department_id NOWAIT; < Whalen > < Hartstein > < Fay > < Raphaely > < Khoo > 5 rows found.
In the following, use the HR
schema to illustrate the use of a subquery with the FOR UPDATE
clause.
Command> SELECT employee_id, job_id FROM job_history > WHERE (employee_id, job_id) NOT IN (SELECT employee_id, job_id > FROM employees) > FOR UPDATE; < 101, AC_ACCOUNT > < 101, AC_MGR > < 102, IT_PROG > < 114, ST_CLERK > < 122, ST_CLERK > < 176, SA_MAN > < 200, AC_ACCOUNT > < 201, MK_REP > 8 rows found.
In the following, use a dynamic parameter placeholder for SELECT ROWS
m
TO
n
and SELECT FIRST
.
Command> SELECT ROWS ? TO ? employee_id FROM employees; Type '?' for help on entering parameter values. Type '*' to end prompting and abort the command. Type '-' to leave the parameter unbound. Type '/;' to leave the remaining parameters unbound and execute the command. Enter Parameter 1 (TT_INTEGER) > 1 Enter Parameter 2 (TT_INTEGER) > 3 < 100 > < 101 > < 102 > 3 rows found. Command> SELECT ROWS :a TO :b employee_id FROM employees; Type '?' for help on entering parameter values. Type '*' to end prompting and abort the command. Type '-' to leave the parameter unbound. Type '/;' to leave the remaining parameters unbound and execute the command. Enter Parameter 1 (TT_INTEGER) > 1 Enter Parameter 2 (TT_INTEGER) > 3 < 100 > < 101 > < 102 > 3 rows found. Command> SELECT FIRST ? employee_id FROM employees; Type '?' for help on entering parameter values. Type '*' to end prompting and abort the command. Type '-' to leave the parameter unbound. Type '/;' to leave the remaining parameters unbound and execute the command. Enter Parameter 1 (TT_INTEGER) > 3 < 100 > < 101 > < 102 > 3 rows found.
The following example illustrates the use of NULLS LAST
in the ORDER BY
clause. Query the employees
table to find employees with a commission percentage greater than .30 or a commission percentage that is NULL. Select the first seven employees and order by commission_pct
and last_name
. Order commission_pct
in descending order and use NULLS LAST
to display rows with NULL values last in the query. Output commission_pct
and last_name
.
Command> SELECT FIRST 7 commission_pct,last_name FROM employees where commission_pct > .30 OR commission_pct IS NULL ORDER BY commission_pct DESC NULLS LAST,last_name; < .4, Russell > < .35, King > < .35, McEwen > < .35, Sully > < <NULL>, Atkinson > < <NULL>, Austin > < <NULL>, Baer > 7 rows found.
WithClause
has the following syntax:
WITH QueryName AS ( Subquery ) [, QueryName AS ( Subquery )] ...
WithClause
has the following parameter:
Parameter | Description |
---|---|
QueryName AS (Subquery ) |
Specifies an alias for a subquery that can be used multiple times within the SELECT statement. |
Subquery factoring provides the WITH
clause that enables you to assign a name to a subquery block, which can subsequently be referenced multiple times within the main SELECT
query. The query name is visible to the main query and any subquery contained in the main query.
The WITH
clause can only be defined as a prefix to the main SELECT
statement.
Subquery factoring is useful in simplifying complex queries that use duplicate or complex subquery blocks in one or more places. In addition, TimesTen uses subquery factoring to optimize the query by evaluating and materializing the subquery block once and providing the result for each reference in the SELECT
statement.
You can specify the set operators: UNION
, MINUS
, INTERSECT
in the main query.
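A minimal sketch that combines a factored subquery with UNION in the main query; the column choices are illustrative only and assume the HR schema tables used elsewhere in this chapter:
WITH high_paid AS (SELECT employee_id, salary FROM employees WHERE salary > 10000)
SELECT employee_id FROM high_paid
UNION
SELECT employee_id FROM job_history WHERE employee_id IN (SELECT employee_id FROM high_paid);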
Restrictions using the WITH
clause:
Do not use the WITH
clause in a view or materialized view definition.
Do not use the WITH
clause in global queries.
Recursive subquery factoring is not supported.
Do not use the WITH
clause in subqueries or derived tables.
You cannot provide a column parameter list for the query alias. For example, TimesTen does not support: WITH
w1
(
c1
,
c2
)
AS
...
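As a workaround, assign the column names inside the subquery with column aliases. A minimal sketch, assuming a hypothetical table t with columns x and y:
WITH w1 AS (SELECT x AS c1, y AS c2 FROM t)
SELECT c1, c2 FROM w1;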
The following example creates the query names dept_costs
and avg_cost
for the initial query block, then uses these names in the body of the main query.
Command> WITH dept_costs AS ( > SELECT department_name, SUM(salary) dept_total > FROM employees e, departments d > WHERE e.department_id = d.department_id > GROUP BY department_name), > avg_cost AS ( > SELECT SUM(dept_total)/COUNT(*) avg > FROM dept_costs) > SELECT * FROM dept_costs > WHERE dept_total > > (SELECT avg FROM avg_cost) > ORDER BY department_name; > DEPARTMENT_NAME DEPT_TOTAL ------------------------------- Sales 304500 Shipping 156400
The SelectList
parameter of the SELECT
statement has the following syntax:
{* | [Owner.]TableName.* | { Expression | [[Owner.]TableName.]ColumnName | [[Owner.]TableName.]ROWID | NULL } [[AS] ColumnAlias] } [,...]
The SelectList
parameter of the SELECT
statement has the following parameters:
Parameter | Description |
---|---|
* |
Includes, as columns of the query result, all columns of all tables specified in the FROM clause. |
[ Owner .] TableName .* |
Includes all columns of the specified table in the result. |
Expression |
An aggregate query includes a GROUP BY clause or an aggregate function.
When the select list is not an aggregate query, the column reference must reference a table in the FROM clause. A column reference in the select list of an aggregate query must reference a column list in the GROUP BY clause. |
[[ Owner .] Table .] ColumnName |
Includes a particular column from the named owner's indicated table. You can also specify the CURRVAL or NEXTVAL column of a sequence. See "Incrementing SEQUENCE values with CURRVAL and NEXTVAL" for more details. |
[[ Owner .] Table .] ROWID |
Includes the ROWID pseudocolumn from the named owner's indicated table. |
NULL |
When NULL is specified, the default for the resulting data type is VARCHAR(0) . You can use the CAST function to convert the result to a different data type. NULL can be specified in the ORDER BY clause. |
ColumnAlias |
Used in an ORDER BY clause, the column alias must correspond to a column in the select list. The same alias can identify multiple columns.
|
The clauses must be specified in the order given in the syntax.
TimesTen does not support subqueries in the select list.
A result column in the select list can be derived in any of the following ways.
A result column can be taken directly from one of the tables listed in the FROM
clause.
Values in a result column can be computed, using an arithmetic expression, from values in a specified column of a table listed in the FROM
clause.
Values in several columns of a single table can be combined in an arithmetic expression to produce the result column values.
Aggregate functions (AVG
, MAX
, MIN
, SUM
, and COUNT
) can be used to compute result column values over groups of rows. Aggregate functions can be used alone or in an expression. You can specify aggregate functions containing the DISTINCT
qualifier that operate on different columns in the same table. If the GROUP BY
clause is not specified, the function is applied over all rows that satisfy the query. If the GROUP BY
clause is specified, the function is applied once for each group defined by the GROUP BY
clause. When you use aggregate functions with the GROUP BY
clause, the select list can contain aggregate functions, arithmetic expressions, and columns in the GROUP BY
clause. For more details on the GROUP BY
clause, see "GROUP BY clause".
A result column containing a fixed value can be created by specifying a constant or an expression involving only constants.
In addition to specifying how the result columns are derived, the select list also controls their relative position from left to right in the query result. The first result column specified by the select list becomes the leftmost column in the query result, and so on.
Result columns in the select list are numbered from left to right. The leftmost column is number 1. Result columns can be referred to by column number in the ORDER BY
clause. This is especially useful to refer to a column defined by an arithmetic expression or an aggregate.
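For example, the following sketch orders by the aggregate in the second select-list position; it reuses the purchasing.supplyprice table that appears in the examples later in this section:
SELECT partnumber, AVG(unitprice) FROM purchasing.supplyprice GROUP BY partnumber ORDER BY 2 DESC;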
To join a table with itself, define multiple correlation names for the table in the FROM
clause and use the correlation names in the select list and the WHERE
clause to qualify columns from that table.
When you use the GROUP BY
clause, one answer is returned per group in accordance with the select list, as follows:
The WHERE
clause eliminates rows before groups are formed.
The GROUP BY
clause groups the resulting rows. See "GROUP BY clause" for more details.
The select list aggregate functions are computed for each group.
In the following example, one value, the average number of days you wait for a part, is returned:
SELECT AVG(deliverydays) FROM purchasing.supplyprice;
The part number and delivery time for all parts that take fewer than 20 days to deliver are returned by the following statement.
SELECT partnumber, deliverydays FROM purchasing.supplyprice WHERE deliverydays < 20;
Multiple rows may be returned for a single part.
The part number and average price of each part are returned by the following statement.
SELECT partnumber, AVG(unitprice) FROM purchasing.supplyprice GROUP BY partnumber;
In the following example, the join returns names and locations of California suppliers. Rows are returned in ascending order by partnumber
values. Rows containing duplicate part numbers are returned in ascending order by vendorname
values. The FROM
clause defines two correlation names (v
and s
), which are used in both the select list and the WHERE
clause. The vendornumber
column is the only common column between vendors
and supplyprice
.
SELECT partnumber, vendorname, s.vendornumber,vendorcity FROM purchasing.supplyprice s, purchasing.vendors v WHERE s.vendornumber = v.vendornumber AND vendorstate = 'CA' ORDER BY partnumber, vendorname;
The following query joins table purchasing.parts
to itself to determine which parts have the same sales price as the part whose serial number is '1133-P-01
'.
SELECT q.partnumber, q.salesprice FROM purchasing.parts p, purchasing.parts q WHERE p.salesprice = q.salesprice AND p.serialnumber = '1133-P-01';
The next example shows how to retrieve the rowid of a specific row. The retrieved rowid value can be used later for another SELECT
, DELETE
, or UPDATE
statement.
SELECT rowid FROM purchasing.vendors WHERE vendornumber = 123;
The following example shows how to use a column alias to retrieve data from the table employees
.
SELECT MAX(salary) AS max_salary FROM employees;
The TableSpec
parameter of the SELECT
statement has the following syntax:
{[Owner.]TableName [CorrelationName] | JoinedTable | DerivedTable}
A simple table specification has the following syntax:
[Owner.]TableName
The TableSpec
parameter of the SELECT
statement has the following parameters:
Parameter | Description |
---|---|
[ Owner .] TableName |
Identifies a table to be referenced. |
CorrelationName |
CorrelationName specifies an alias for the immediately preceding table. When accessing columns of that table elsewhere in the SELECT statement, use the correlation name instead of the actual table name within the statement. The scope of the correlation name is the SQL statement in which it is used. The correlation name must conform to the syntax rules for a basic name. See "Basic names".
All correlation names within one statement must be unique. |
JoinedTable |
Specifies the query that defines the table join. The syntax of JoinedTable is presented under "JoinedTable". |
DerivedTable |
Specifies a table derived from the evaluation of a SELECT statement. No FIRST NumRows or ROWS m TO n clauses are allowed in this SELECT statement. The syntax of DerivedTable is presented under "DerivedTable". |
The JoinedTable
parameter specifies a table derived from CROSS JOIN
, INNER JOIN
, LEFT OUTER JOIN
or RIGHT OUTER JOIN
.
The syntax for JoinedTable
is as follows:
{CrossJoin | QualifiedJoin}
Where CrossJoin
is:
TableSpec1 CROSS JOIN TableSpec2
And QualifiedJoin
is:
TableSpec1 [JoinType] JOIN TableSpec2 ON SearchCondition
In the QualifiedJoin
parameter, JoinType
syntax is as follows:
{INNER | LEFT [OUTER] | RIGHT [OUTER]}
The JoinedTable
parameter of the TableSpec
clause of a SELECT
statement has the following parameters:
FULL OUTER JOIN
is not supported.
A joined table can be used to replace a table in a FROM
clause anywhere except in a statement that defines a materialized view. Thus, a joined table can be used in UNION
, MINUS
, INTERSECT
, a subquery, a nonmaterialized view, or a derived table.
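For example, a minimal sketch that places a joined table inside a derived table, using the regions and countries tables from the HR schema; the alias rc is illustrative:
SELECT rc.region_name, rc.country_name FROM (SELECT r.region_name, c.country_name FROM regions r INNER JOIN countries c ON r.region_id = c.region_id) rc WHERE rc.region_name = 'Asia';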
A subquery cannot be specified in the operand of a joined table. For example, the following statement is not supported:
SELECT * FROM regions INNER JOIN (SELECT * FROM countries) table2 ON regions.region_id=table2.region_id;
A view can be specified as an operand of a joined table.
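A minimal sketch, assuming a hypothetical view asia_countries defined over the countries table:
CREATE VIEW asia_countries AS SELECT country_id, country_name, region_id FROM countries WHERE region_id = 3;
SELECT * FROM regions INNER JOIN asia_countries ON regions.region_id = asia_countries.region_id;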
A temporary table cannot be specified as an operand of a joined table.
OUTER JOIN
can be specified in two ways, either using the (+) operator in SearchCondition
of the WHERE
clause or using a JOIN
table operation. The two specification methods cannot coexist in the same statement.
Join order and grouping can be specified with a JoinedTable
operation, but they cannot be specified with the (+
) operator. For example, the following operation cannot be specified with the (+
) operator:
t LEFT JOIN (t2 INNER JOIN t3 ON x2=x3) ON (x1 = x2 - x3)
These examples use the regions
and countries
tables from the HR
schema.
The following performs a left outer join.
SELECT * FROM regions LEFT JOIN countries ON regions.region_id=countries.region_id WHERE regions.region_id=3; < 3, Asia, JP, Japan, 3 > < 3, Asia, CN, China, 3 > < 3, Asia, IN, India, 3 > < 3, Asia, AU, Australia, 3 > < 3, Asia, SG, Singapore, 3 > < 3, Asia, HK, HongKong, 3 > 6 rows found.
You can also perform a left outer join with the (+) operator, as follows.
SELECT * FROM regions, countries WHERE regions.region_id=countries.region_id (+) AND regions.region_id=3;
For more examples of joins specified with the (+
) operator, see "Examples".
The following performs a right outer join.
SELECT * FROM regions RIGHT JOIN countries ON regions.region_id=countries.region_id WHERE regions.region_id=3; < AU, Australia, 3, 3, Asia > < CN, China, 3, 3, Asia > < HK, HongKong, 3, 3, Asia > < IN, India, 3, 3, Asia > < JP, Japan, 3, 3, Asia > < SG, Singapore, 3, 3, Asia > 6 rows found.
The next example performs a right outer join with the (+) operator.
SELECT * FROM countries, regions WHERE regions.region_id (+)=countries.region_id AND countries.region_id=3; < JP, Japan, 3, 3, Asia > < CN, China, 3, 3, Asia > < IN, India, 3, 3, Asia > < AU, Australia, 3, 3, Asia > < SG, Singapore, 3, 3, Asia > < HK, HongKong, 3, 3, Asia > 6 rows found.
Note that the two right outer join forms return the same rows, but in a different display order. There should be no expectation of row order for join results.
The following performs an inner join.
SELECT * FROM regions INNER JOIN countries ON regions.region_id=countries.region_id WHERE regions.region_id=2; < 2, Americas, US, United States of America, 2 > < 2, Americas, CA, Canada, 2 > < 2, Americas, BR, Brazil, 2 > < 2, Americas, MX, Mexico, 2 > < 2, Americas, AR, Argentina, 2 > 5 rows found.
The next example performs a cross join.
SELECT * FROM regions CROSS JOIN countries WHERE regions.region_id=1; < 1, Europe, AR, Argentina, 2 > < 1, Europe, AU, Australia, 3 > < 1, Europe, BE, Belgium, 1 > < 1, Europe, BR, Brazil, 2 > ... < 1, Europe, SG, Singapore, 3 > < 1, Europe, UK, United Kingdom, 1 > < 1, Europe, US, United States of America, 2 > < 1, Europe, ZM, Zambia, 4 > < 1, Europe, ZW, Zimbabwe, 4 > 25 rows found.
CREATE TABLE
INSERT
INSERT...SELECT
UPDATE
A derived table is the result of a SELECT
statement in the FROM
clause, with an alias.
The syntax for DerivedTable
is as follows:
(Subquery) [CorrelationName]
The DerivedTable
parameter of the TableSpec
clause of a SELECT
statement has the following parameters:
Parameter | Description |
---|---|
Subquery |
For information on subqueries, see "Subqueries". |
CorrelationName |
Optionally use CorrelationName to specify an alias for the derived table. It must be different from any table name referenced in the query. |
When using a derived table, these restrictions apply:
The DUAL
table can be used in a SELECT
statement that references no other tables, but needs to return at least one row. Selecting from DUAL
is useful for computing a constant expression (an expression that is evaluated to a constant value) with the SELECT
statement. Because DUAL
has only one row, the constant is returned only once. A short sketch follows the restrictions below.
Subquery
cannot refer to a column from another derived table.
A derived table cannot be used as a source of a joined table.
A derived table cannot be used as a target of a DELETE
or UPDATE
statement.
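As noted above, because DUAL has exactly one row, it is convenient for evaluating a constant expression; a minimal sketch:
SELECT 2 + 3 FROM DUAL;
SELECT SYSDATE FROM DUAL;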
Specify the GROUP BY
clause if you want the database to group the selected rows based on the value of expressions for each row and return a single row of summary information for each group. If the GROUP BY
clause is omitted, the entire query result is treated as one group. If this clause contains CUBE
or ROLLUP
, the results contain superaggregate groupings in addition to the regular groupings.
The expressions in the GROUP BY
clause can do the following:
Designate single or multiple columns.
Include arithmetic operations, the ROWID
pseudocolumn, or NULL.
Include a date, a constant, or a dynamic parameter.
Include ROLLUP
or CUBE
clauses, where the results produce superaggregate groupings in addition to the regular groupings. Superaggregate groupings are calculated subtotals and totals returned with the regular groupings in the GROUP BY
clause.
Include GROUPING SETS
clause to distinguish which superaggregate groupings to produce.
When you use the GROUP BY
clause, the select list can contain only aggregate functions and columns referenced in the GROUP BY
clause. If the select list contains the construct *
, TableName.
*
, or Owner.TableName
.*
, the GROUP BY
clause must contain all columns that the *
includes. NULL values are considered equivalent in grouping rows. If all other columns are equal, all NULL values in a column are placed in a single group.
Note:
To identify and potentially eliminate NULL groupings from the superaggregate groupings, use the GROUPING
function, as described in "GROUPING". The general syntax for the GROUP BY
clause is the following:
GROUP BY {Expression | RollupCubeClause | GroupingSetsClause} [,...]
GroupingSetsClause::= GROUPING SETS ( {GroupingExpressionList | RollupCubeClause} [,...] )
RollupCubeClause::= {ROLLUP | CUBE} ( GroupingExpressionList )
GroupingExpressionList::= { Expression | ExpressionList [, { Expression | ExpressionList } ] ...}
ExpressionList::= ( Expression [, Expression ] ...)
Parameter | Description |
---|---|
Expression |
Valid expression syntax. See Chapter 3, "Expressions". |
RollupCubeClause |
The GROUP BY clause may include one or more ROLLUP or CUBE clauses. |
GroupingSetsClause |
The GROUP BY clause may include one or more GROUPING SETS clauses. The GROUPING SETS clause enables you to explicitly specify which groupings of data the database returns. For more information, see "GROUPING SETS". |
GroupingExpressionList |
The GROUP BY clause can contain multiple expressions or expression lists. |
ROLLUP GroupingExpressionList |
The ROLLUP clause is used to generate superaggregate rows from groups. For more information, see "ROLLUP". |
CUBE GroupingExpressionList |
The CUBE clause groups selected rows based on the values of all possible combinations of the grouping columns in the CUBE clause. For more information, see "CUBE". |
ExpressionList |
A list of one or more expressions, each separated by a comma. |
The following GROUP BY
example sums the salaries for employees in the employees
table and uses the SUBSTR
expression to group the data by job function.
Command> SELECT SUBSTR (job_id, 4,10), SUM (salary) FROM employees > GROUP BY SUBSTR (job_id,4,10); < PRES, 24000 > < VP, 34000 > < PROG, 28800 > < MGR, 24000 > < ACCOUNT, 47900 > < MAN, 121400 > < CLERK, 133900 > < REP, 273000 > < ASST, 4400 > 9 rows found.
Query emp_details_view
to select the first 10 departments and managers within the department and count the number of employees in the department with the same manager. Use the GROUP BY
clause to group the result by department and manager.
Command> columnlabels on; Command> SELECT first 10 department_id AS DEPT, manager_id AS MGR, > COUNT(employee_id) AS NUM_EMP > FROM emp_details_view > GROUP BY (department_id, manager_id) > ORDER BY department_id, manager_id; DEPT, MGR, NUM_EMP < 10, 101, 1 > < 20, 100, 1 > < 20, 201, 1 > < 30, 100, 1 > < 30, 114, 5 > < 40, 101, 1 > < 50, 100, 5 > < 50, 120, 8 > < 50, 121, 8 > < 50, 122, 8 > 10 rows found.
The following definitions describe how columns can be grouped within the ROLLUP
, CUBE
or GROUPING SETS
clauses:
Grouping column: A single column used in a GROUP BY
clause. For example, in the following GROUP BY
clause, X
, Y
, and Z
are group columns.
GROUP BY X, GROUPING SETS(Y, Z)
Composite Column: A list of grouping columns inside parentheses. For example, in the following clause, (C1, C2)
and (C3, C4)
are composite columns.
GROUP BY ROLLUP( (C1,C2), (C3,C4), C5);
Grouping: Grouping is a single level of aggregation from within a grouping set. For example, in the following statement, (C1)
and (C2, C3)
are individual groupings.
GROUP BY GROUPING SETS(C1, (C2,C3));
Grouping Set: A collection of groupings inside parentheses. For example, in the following statement, (C1, (C2, C3))
and (C2, (C4, C5))
are two individual grouping sets.
GROUP BY GROUPING SETS(C1, (C2,C3)), GROUPING SETS(C2, (C4, C5));
Concatenated grouping sets: Separate multiple grouping sets with commas. The result is a cross-product of groupings from each grouping set.
Grand Total or Empty set column: A grand total or empty set grouping computes aggregation by considering all rows as one group. Grand totals are automatically provided in the results for ROLLUP
and CUBE
clauses; however, you request the grand total in the GROUPING SETS
clause by providing empty parentheses, ( )
.
Duplicate grouping columns can be used in ROLLUP
, CUBE
or GROUPING SETS
. However, it does result in duplicated result rows.
Restrictions for ROLLUP
, CUBE
and GROUPING SETS
clauses are as follows:
These clauses are not supported within a materialized view definition.
These clauses are not supported for global queries across a cache grid.
The following sections describe the ROLLUP
, CUBE
and GROUPING SETS
clauses:
The GROUPING SETS
clause enables you to explicitly specify which groupings of data the database returns. You specify only the desired groups by enclosing them within parentheses, so the database generates only the superaggregate summaries in which you are interested.
The following statement produces three groups: the first returns results for each combination of gender and year, the second returns a summary superaggregate for each month, and the last returns the grand total.
SELECT GENDER, YEAR, MONTH, SUM (NUM_OF_STUDENTS) AS TOTAL
  FROM INSTRUCTOR_SUMMARY
  GROUP BY GROUPING SETS ((GENDER, YEAR),   -- 1ST GROUP
                          (MONTH),          -- 2ND GROUP
                          ());              -- 3RD GROUP
You can combine multiple GROUPING SETS
to generate specific combinations between the multiple GROUPING SETS
. The following statement contains two GROUPING SETS
clauses:
GROUP BY GROUPING SETS (YEAR, MONTH), GROUPING SETS (WEEK, DAY);
This is equivalent to the following GROUPING SETS
statement:
GROUP BY GROUPING SETS (YEAR, WEEK), (YEAR, DAY), (MONTH, WEEK), (MONTH, DAY);
When a GROUP BY
clause has both regular grouping columns and a GROUPING SETS
clause, the results are grouped by the regular grouping column as follows:
GROUP BY a, b GROUPING SETS(c, d);
This is equivalent to the following:
GROUP BY GROUPING SETS((a, b, c), (a, b, d));
Example
The following example specifies the grouping sets of (region_name
, country_name
), state_province
, and grand totals.
Command> SELECT region_name AS Region, > country_name AS Country, > state_province AS State, > COUNT(employee_id) AS "Total Emp" > FROM regions r, countries c, locations l, departments d, employees e > WHERE r.region_id = c.region_id AND > l.country_id = c.country_id AND > d.location_id = l.location_id AND > d.department_id = e.department_id > GROUP BY grouping sets((region_name, country_name), state_province, ()) > ORDER BY region_name, state_province; REGION, COUNTRY, STATE, TOTAL EMP < Americas, Canada, <NULL>, 2 > < Americas, United States of America, <NULL>, 68 > < Europe, Germany, <NULL>, 1 > < Europe, United Kingdom, <NULL>, 35 > < <NULL>, <NULL>, Bavaria, 1 > < <NULL>, <NULL>, California, 45 > < <NULL>, <NULL>, Ontario, 2 > < <NULL>, <NULL>, Oxford, 34 > < <NULL>, <NULL>, Texas, 5 > < <NULL>, <NULL>, Washington, 18 > < <NULL>, <NULL>, <NULL>, 106 > < <NULL>, <NULL>, <NULL>, 1 > 12 rows found.
ROLLUP
is used within the GROUP BY
clause. When used with SUM
, ROLLUP
generates subtotals from most detailed level (all columns specified in the ROLLUP
clause) to the grand total level, by removing one column at each level. These are known as superaggregate rows.
The ROLLUP
clause returns the following:
Regular aggregate rows that would be produced by GROUP BY
without using ROLLUP
.
Subtotals following the grouping list specified in the ROLLUP
clause. ROLLUP
takes as its argument an ordered list of grouping columns. Each subtotal is created for the ordered list of grouping columns dropping the right-most grouping column until it reaches the grand total. For instance, if you specify GROUP BY ROLLUP(x, y, z)
, the returned superaggregate groups would be as follows: (x,y,z), (x,y), (x), ( )
.
The number of subtotals created is n
+1 aggregate levels, where n
is the number of grouping columns. For example, three expressions (n
=3) in the ROLLUP
clause results in n
+1 = 3+1 = 4 groupings.
Grand total row.
You can group columns using composite columns inside parentheses. For example, in the following statement:
GROUP BY ROLLUP( (a, b), (c, d), e);
The (a, b
) and (c, d
) composite columns are treated as a unit when the database produces the ROLLUP
results. In this example, the grouping sets returned are as follows: ((a, b), (c, d), e )
, ((a, b), (c, d))
, (a, b)
and ()
.
You can execute several ROLLUP
clauses within your SELECT
statement, as follows:
SELECT C1, COUNT(*) FROM T GROUP BY ROLLUP(a, b), ROLLUP(c, d);
This is equivalent to the following statement:
SELECT C1, COUNT(*) FROM T GROUP BY GROUPING SETS((a, b),(a),()), GROUPING SETS((c, d),(c), ());
Examples
Query the employees
table to select the first 10 departments and return the number of employees under each manager in each department. Use ROLLUP
to subtotal the number of employees in each department and return a grand total of all employees in the company.
Command> SELECT first 10 department_id AS Dept, > manager_id AS Mgr, > COUNT(employee_id) AS "Total emp" > FROM employees > GROUP BY ROLLUP(department_id, manager_id) > ORDER BY department_id, manager_id; DEPT, MGR, TOTAL EMP < 10, 101, 1 > < 10, <NULL>, 1 > < 20, 100, 1 > < 20, 201, 1 > < 20, <NULL>, 2 > < 30, 100, 1 > < 30, 114, 5 > < 30, <NULL>, 6 > < 40, 101, 1 > < 40, <NULL>, 1 > 10 rows found.
The following query returns the number of employees in each region, country and state or province. The rollup returns superaggregate rows for subtotals of all employees in each state or province and in each country and a grand total for all employees in the company. By combining the region and country as its own unit (within parentheses), the rollup does not return all employees for each region.
Command> SELECT region_name AS Region, > country_name AS Country, > state_province AS State, > COUNT(employee_id) AS "Total Emp" > FROM regions r, countries c, locations l, departments d, employees e > WHERE r.region_id = c.region_id > AND l.country_id = c.country_id > AND d.location_id = l.location_id > AND d.department_id = e.department_id > GROUP BY rollup((region_name, country_name), state_province) > ORDER BY region_name; REGION, COUNTRY, STATE, TOTAL EMP < Americas, Canada, Ontario, 2 > < Americas, United States of America, Texas, 5 > < Americas, United States of America, California, 45 > < Americas, United States of America, Washington, 18 > < Americas, Canada, <NULL>, 2 > < Americas, United States of America, <NULL>, 68 > < Europe, Germany, Bavaria, 1 > < Europe, United Kingdom, <NULL>, 1 > < Europe, United Kingdom, Oxford, 34 > < Europe, Germany, <NULL>, 1 > < Europe, United Kingdom, <NULL>, 35 > < <NULL>, <NULL>, <NULL>, 106 > 12 rows found.
The CUBE
clause groups the selected rows based on the values of all possible combinations of the grouping columns in the CUBE
clause. It returns a single row of summary information for each group. For example, three expressions (n
=3) in the CUBE
clause results in 2^n = 2^3 = 8 groupings. Rows grouped on the values of n expressions are called regular rows; all others are called superaggregate rows. You can group using composite columns; a sketch follows the list of groupings below. For instance, a commonly requested CUBE
operation is for state sales subtotals on all combinations of month, state, and product sold.
For instance, if you specify GROUP BY CUBE(a, b, c)
, the resulting aggregate groupings generated are as follows: (a,b,c
), (a,b)
, (a,c
), (b,c
), a, b, c
, ( )
.
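Composite columns can also be used with CUBE. In the following hedged sketch, the column names are hypothetical; treating (month, state) as a single unit yields the groupings ((month, state), product), ((month, state)), (product), and ( ):
GROUP BY CUBE((month, state), product);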
Example
To return the number of employees for each region and country, issue the following query.
Command> SELECT region_name AS Region, > country_name AS Country, > COUNT(employee_id) AS "Total Emp" > FROM regions r, countries c, locations l, departments d, employees e > WHERE r.region_id = c.region_id > AND l.country_id = c.country_id > AND d.location_id = l.location_id > AND d.department_id = e.department_id > GROUP BY CUBE(region_name, country_name) > ORDER BY region_name; REGION, COUNTRY, TOTAL EMP < Americas, Canada, 2 > < Americas, United States of America, 68 > < Americas, <NULL>, 70 > < Europe, Germany, 1 > < Europe, United Kingdom, 35 > < Europe, <NULL>, 36 > < <NULL>, Canada, 2 > < <NULL>, Germany, 1 > < <NULL>, United Kingdom, 35 > < <NULL>, United States of America, 68 > < <NULL>, <NULL>, 106 > 11 rows found.
The TRUNCATE TABLE
statement is similar to a DELETE
statement that deletes all rows. However, it is faster than DELETE
in most circumstances, as DELETE
removes each row individually.
No privilege is required for the table owner.
DELETE
for another user's table.
TRUNCATE TABLE [Owner.]TableName
Parameter | Description |
---|---|
[ Owner .] TableName |
Identifies the table to be truncated. |
TRUNCATE
is a DDL statement and thus is controlled by the DDLCommitBehavior
attribute. If DDLCommitBehavior
=0
(the default), then a commit is performed before and after execution of the TRUNCATE
statement. If DDLCommitBehavior
=1, then TRUNCATE
is part of a transaction and these transactional rules apply:
TRUNCATE
operations can be rolled back, as shown in the sketch after this list of rules.
Subsequent INSERT
statements are not allowed in the same transaction as a TRUNCATE
statement.
Concurrent read committed read operations are allowed, and the semantics of the reads are the same as for read committed reads in the presence of DELETE
statements.
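A minimal sketch of the first rule, assuming DDLCommitBehavior=1 and that the truncating transaction has not yet committed; the table is the recreation.clubs table used in the example later in this section:
TRUNCATE TABLE recreation.clubs;
ROLLBACK;
-- The rows of recreation.clubs are restored because the TRUNCATE was rolled back.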
TRUNCATE
is allowed even when there are child tables. However, child tables need to be empty for TRUNCATE
to proceed. If any of the child tables have any rows in them, TimesTen returns an error indicating that a child table is not empty.
TRUNCATE
is not supported with any detail table of a materialized view, table that is a part of a cache group, or temporary table.
When a table contains out-of-line varying-length data, the performance of TRUNCATE TABLE
is similar to that of a DELETE
statement that deletes all rows in a table. For more details on out-of-line data, see "Numeric data types".
Where tables are being replicated, the TRUNCATE
statement replicates to the subscriber, even when no rows are operated upon.
When tables are being replicated with timestamp conflict checking enabled, conflicts are not reported.
DROP TABLE
and ALTER TABLE
operations cannot be used to change hash pages on uncommitted truncated tables.
To delete all the rows from the recreation.clubs
table, use:
TRUNCATE TABLE recreation.clubs;
ALTER TABLE
DROP TABLE
The UNLOAD CACHE GROUP
statement removes data from the cache group.
No privilege is required for the cache group owner.
UNLOAD CACHE GROUP
or UNLOAD ANY CACHE GROUP
for another user's cache group.
UNLOAD CACHE GROUP [Owner.]GroupName [WHERE ConditionalExpression] [COMMIT EVERY n ROWS]
or
UNLOAD CACHE GROUP [Owner.]GroupName WITH ID (ColumnValueList);
Parameter | Description |
---|---|
[ Owner .] GroupName |
Name assigned to the cache group. |
WHERE ConditionalExpression |
Use the WHERE clause to specify a search condition to qualify the target rows of the cache operation. If you use more than one table in the WHERE clause and the tables have columns with the same names, fully qualify the table names. |
COMMIT EVERY n ROWS |
Use the COMMIT EVERY n ROWS clause to indicate the frequency (based on the number of rows that are unloaded) at which a commit is issued during the unload operation.
If you specify this clause, the cache agent must be running and the unload must be the only operation in the transaction. To improve performance, use this clause when you are performing operations on cache groups that affect large amounts of data. Do not use this clause when you have cache groups with a small amount of data. |
WITH ID ColumnValueList |
The WITH ID clauses enables you to use primary key values to unload the cache instance. Specify ColumnValueList as either a list of literals or binding parameters to represent the primary key values. |
The UNLOAD
CACHE
GROUP
statement deletes rows from the TimesTen cache tables without affecting the data in the Oracle database tables.
If the cache group is replicated, an UNLOAD CACHE GROUP
statement deletes the entire contents of any replicated cache group as well.
Execution of the UNLOAD CACHE GROUP
statement for an AWT cache group waits until updates on the rows have been propagated to the Oracle database.
The UNLOAD CACHE GROUP
statement can be used for any type of cache group. For a description of cache group types, see "CREATE CACHE GROUP".
Use the UNLOAD CACHE GROUP
statement carefully with cache groups that have the AUTOREFRESH
attribute. A row that is unloaded can reappear in the cache group as the result of an autorefresh operation if the row or its child rows are updated in the Oracle database.
Following the execution of an UNLOAD CACHE GROUP
statement, the ODBC function SQLRowCount()
, the JDBC method getUpdateCount()
, and the OCI function OCIAttrGet()
with the OCI_ATTR_ROW_COUNT
argument return the number of cache instances that were unloaded.
If you specify the COMMIT
EVERY
n
ROWS
clause, the cache agent performs the unload operation and commits the transaction after unloading the data. Make sure the cache agent is up and running. If you do not specify the COMMIT
EVERY
n
ROWS
clause, the unload operation is executed by the application.
If you specify the COMMIT
EVERY
n
ROWS
clause, you cannot rollback the unload operation. If the unload operation fails when you specify the COMMIT
EVERY
n
ROWS
clause (where n
>= 0), the cache group could be in an inconsistent state since some unloaded rows are already committed. Therefore, some cache instances may be partially unloaded. If this occurs, unload the cache group again.
Use the WITH ID
clause to specify binding parameters.
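A minimal sketch of WITH ID, reusing the recreation.cache cache group created in the example at the end of this section, whose root table has the single primary key column clubname; the literal value is hypothetical, and the second form uses a binding parameter instead:
UNLOAD CACHE GROUP recreation.cache WITH ID ('Chess club');
UNLOAD CACHE GROUP recreation.cache WITH ID (?);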
The UNLOAD
CACHE
GROUP
operation is executed in its own transaction.
Do not reference child tables in the WHERE
clause.
Do not use the WITH ID
clause on read-only cache groups or user managed cache groups with the autorefresh attribute unless the cache group is a dynamic cache group.
Do not use the WITH ID
clause with the COMMIT EVERY
n
ROWS
clause.
Do not use the COMMIT
EVERY
n
ROWS
clause with global queries. If you set the GlobalProcessing
optimizer flag, and issue the unload operation with the COMMIT
EVERY
n
ROWS
clause, TimesTen throws an error. However, if you set the GlobalProcessing
optimizer flag and issue the unload operation without the COMMIT
EVERY
n
ROWS
clause, the unload operation is propagated among grid members.
Use the UNLOAD
CACHE
GROUP
... COMMIT
EVERY
n
ROWS
to unload data from cached tables. The cache agent unloads the data because the COMMIT
EVERY
n
ROWS
clause is used.
Command> UNLOAD CACHE GROUP testcache WHERE sampleuser.orders.order_id > 100 > COMMIT EVERY 100 ROWS; 2 cache instances affected.
CREATE
and UNLOAD
a cache group. The application performs the unload operation because the COMMIT
EVERY
n
ROWS
clause is not used.
CREATE CACHE GROUP recreation.cache FROM recreation.clubs ( clubname CHAR(15) NOT NULL, clubphone SMALLINT, activity CHAR(18), PRIMARY KEY(clubname)) WHERE (recreation.clubs.activity IS NOT NULL); UNLOAD CACHE GROUP recreation.cache;
ALTER CACHE GROUP
CREATE CACHE GROUP
DROP CACHE GROUP
FLUSH CACHE GROUP
LOAD CACHE GROUP
The UPDATE
statement updates the values of one or more columns in all rows of a table or in rows that satisfy a search condition.
No privilege is required for the table owner.
UPDATE
for another user's table.
UPDATE [hint] [FIRST NumRows] {[Owner.]TableName [CorrelationName]} SET {ColumnName = {Expression1 | NULL | DEFAULT}} [,...] [ WHERE SearchCondition ] [RETURNING|RETURN Expression2[,...] INTO DataItem[,...]]
Parameter | Description |
---|---|
hint |
Specifies a statement level optimizer hint for the UPDATE statement. For more information on hints, see "Statement level optimizer hints". |
FIRST NumRows |
Specifies the number of rows to update. FIRST NumRows is not supported in subquery statements. NumRows must be either a positive INTEGER value or a dynamic parameter placeholder. The syntax for a dynamic parameter placeholder is either ? or :DynamicParameter . The value of the dynamic parameter is supplied when the statement is executed. |
[ Owner .] TableName [ CorrelationName ] |
[ Owner .] TableName identifies the table to be updated.
All correlation names within one statement must be unique. |
SET ColumnName |
ColumnName specifies a column to be updated. You can update several columns of the same table with a single UPDATE statement. Primary key columns can be included in the list of columns to be updated as long as the values of the primary key columns are not changed. |
Expression 1 |
Any expression that does not contain an aggregate function. The expression is evaluated for each row qualifying for the update operation. The data type of the expression must be compatible with the data type of the updated column. Expression1 can specify a column or sequence CURRVAL or NEXTVAL reference when updating values. See "Incrementing SEQUENCE values with CURRVAL and NEXTVAL" for more details. |
NULL |
Puts a NULL value in the specified column of each row satisfying the WHERE clause. The column must allow NULL values. |
DEFAULT |
Specifies that the column should be updated with the default value. |
WHERE SearchCondition |
The search condition can contain a subquery. All rows for which the search condition is true are updated as specified in the SET clause. Rows that do not satisfy the search condition are not affected. If no rows satisfy the search condition, the table is not changed. |
Expression2 |
Valid expression syntax. See Chapter 3, "Expressions". |
DataItem |
Host variable or PL/SQL variable that stores the retrieved Expression2 value. |
If the WHERE
clause is omitted, all rows of the table are updated as specified by the SET
clause.
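To limit how many qualifying rows are modified, specify FIRST NumRows immediately after the UPDATE keyword. A minimal sketch using the employees table; because the rows chosen are not defined by any order, this form is normally combined with a WHERE clause:
UPDATE FIRST 5 employees SET commission_pct = 0.1 WHERE commission_pct IS NULL;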
TimesTen generates a warning when a character or binary string is truncated during an UPDATE
operation.
A table on which a unique constraint is defined cannot be updated to contain duplicate rows.
The UPDATE
operation fails if it violates any foreign key constraint. See "CREATE TABLE" for a description of foreign key constraints.
Restrictions on the RETURNING
clause:
Each Expression2
must be a simple expression. Aggregate functions are not supported.
You cannot return a sequence number into an OUT
parameter.
ROWNUM
and subqueries cannot be used in the RETURNING
clause.
Parameters in the RETURNING
clause cannot be duplicated anywhere in the UPDATE
statement.
Using the RETURNING
clause to return multiple rows requires PL/SQL BULK COLLECT
functionality. See "FORALL and BULK COLLECT operations" in Oracle TimesTen In-Memory Database PL/SQL Developer's Guide.
In PL/SQL, you cannot use a RETURNING
clause with a WHERE CURRENT OF
operation.
Use the UPDATE
statement to update employees
with department_id
= 110. For employees
with department_id
= 110, update the manager_id
to the manager_id
of employees
with job_id
= 'FI_ACCOUNT
'. Use the DISTINCT
qualifier in the subquery of the SET
clause.
First find the manager_id
of employees
with job_id
= 'FI_ACCOUNT
'.
Command> SELECT manager_id FROM employees where job_id = 'FI_ACCOUNT'; < 108 > < 108 > < 108 > < 108 > < 108 > 5 rows found.
Next find the manager_id
of employees
with department_id
= 110.
Command> SELECT manager_id FROM employees WHERE department_id = 110; < 101 > < 205 > 2 rows found.
Now update the manager_id
of employees
with department_id
= 110. Use SELECT
DISTINCT
in the subquery of the SET
clause. After the UPDATE
, verify the manager_id
for employees
with department_id
= 110 was updated.
Command> UPDATE employees SET manager_id = > (SELECT DISTINCT employees.manager_id > FROM employees > WHERE employees.job_id = 'FI_ACCOUNT') > WHERE employees.department_id = 110; 2 rows updated. Command> SELECT manager_id FROM employees WHERE department_id = 110; < 108 > < 108 > 2 rows found.
Use subqueries in the SET
clause of the UPDATE
statement. Update employees
with location_id
= 1700 or location_id
= 2400. Set department_id
for these employees to the department_id
of location_id
= 2500. (This is department_id
80). Set salary for these employees to the maximum salary of their department.
First query the first 5 employees to check their department_id and salary.
Command> SELECT FIRST 5 employee_id, department_id, salary > FROM employees > ORDER BY employee_id, department_id, salary; < 100, 90, 24000 > < 101, 90, 17000 > < 102, 90, 17000 > < 103, 60, 9000 > < 104, 60, 6000 > 5 rows found.
Now use the UPDATE
statement to update employees.
Command> UPDATE employees e1 > SET department_id = > (SELECT department_id FROM departments > WHERE location_id = 2500), > salary = > (SELECT MAX(salary) FROM employees e2 > WHERE e1.department_id = e2.department_id) > WHERE department_id IN > (SELECT department_id FROM departments > WHERE location_id = 2400 OR location_id = 1700); 19 rows updated.
Query the first five employees again to check that employees with the original department_id
of 90 have been updated. The department_id
is now 80 and the salary is 24000.
Command> SELECT FIRST 5 employee_id, department_id, salary > FROM employees > ORDER BY employee_id, department_id, salary; < 100, 80, 24000 > < 101, 80, 24000 > < 102, 80, 24000 > < 103, 60, 9000 > < 104, 60, 6000 > 5 rows found.
The following example increases the price of parts costing more than $500 by 25 percent.
UPDATE purchasing.parts SET salesprice = salesprice * 1.25 WHERE salesprice > 500.00;
This next example updates the column with the NEXTVAL
value from sequence seq
.
UPDATE student SET studentno = seq.NEXTVAL WHERE name = 'Sally';
The following query updates the status of all the customers who have at least one unshipped order.
UPDATE customers SET customers.status = 'unshipped' WHERE customers.id = ANY (SELECT orders.custid FROM orders WHERE orders.status = 'unshipped');
The following statement updates all the duplicate orders, assuming id
is not a primary key.
UPDATE orders a SET orders.status = 'shipped' WHERE EXISTS (SELECT 1 FROM orders b WHERE a.id = b.id AND a.rowid < b.rowid);
This next example updates job_id
, salary
and department_id
for an employee whose last name is 'Jones'
in the employees
table. The values of salary
, last_name
and department_id
are returned into variables.
Command> VARIABLE bnd1 NUMBER(8,2); Command> VARIABLE bnd2 VARCHAR2(25) INLINE NOT NULL; Command> VARIABLE bnd3 NUMBER(4); Command> UPDATE employees SET job_id='SA_MAN', salary=salary+1000, > department_id=140 WHERE last_name='Jones' > RETURNING salary*0.25, last_name, department_id > INTO :bnd1, :bnd2, :bnd3; 1 row updated. Command> PRINT bnd1 bnd2 bnd3; BND1 : 950 BND2 : Jones BND3 : 140
TimesTen supports join update statements. A join update can be used to update one or more columns of a table using the result of a subquery.
UPDATE [Owner.]TableName SET ColumnName=Subquery [WHERE SearchCondition]
or
UPDATE [Owner.]TableName SET (ColumnName[,...])=Subquery [WHERE SearchCondition]
A join update statement has the following parameters:
Parameter | Description |
---|---|
[ Owner .] TableName |
Identifies the table to be updated. |
SET ( ColumnName [,...])= Subquery |
Specifies the column to be updated. You can update several columns of the same table with a single UPDATE statement. The SET clause can contain only one subquery, although this subquery can be nested.
The number of values in the select list of the subquery must be the same as the number of columns specified in the SET clause. |
WHERE SearchCondition |
The search condition can contain a subquery. All rows for which the search condition is true are updated as specified in the SET clause. Rows that do not satisfy the search condition are not affected. If no rows satisfy the search condition, the table is not changed. |
The subquery in the SET
clause of a join update does not reduce the number of rows from the target table that are to be updated. The reduction must be specified using the WHERE
clause. Thus if a row from the target table qualifies the WHERE
clause but the subquery returns no rows for this row, this row is updated with a NULL
value in the updated column.
In this example, if a row from t1
has no match in t2
, then its x1
value in the first SELECT
and its x1
and y1
values in the second SELECT
are set to NULL
.
UPDATE t1 SET x1=(SELECT x2 FROM t2 WHERE id1=id2); UPDATE t1 SET (x1,y1)=(SELECT x2,y2 FROM t2 WHERE id1=id2);
In order to restrict the UPDATE
statement to update only rows from t1
that have a match in t2
, a WHERE
clause with a subquery has to be provided as follows.
UPDATE t1 SET x1=(SELECT x2 FROM t2 WHERE id1=id2) WHERE id1 IN (SELECT id2 FROM t2); UPDATE t1 SET (x1,y1)=(SELECT x2,y2 FROM t2 WHERE id1=id2) WHERE id1 IN (SELECT id2 FROM t2);