Oracle® In-Memory Database Cache User's Guide 11g Release 2 (11.2.2) E21634-08
The following sections contain information about cache performance.
Note:
See Oracle TimesTen In-Memory Database Troubleshooting Guide for extensive information about monitoring autorefresh operations and improving autorefresh performance. See "Monitoring autorefresh cache groups" and "Poor autorefresh performance".
Oracle TimesTen In-Memory Database Troubleshooting Guide also has information about AWT cache group performance. See "Monitoring AWT performance" and "Possible causes of poor AWT performance".
Improving performance when using incremental autorefresh for read-only cache groups
Improving performance when reclaiming memory during autorefresh operations
Caching the same Oracle table on two or more TimesTen databases
Dynamic loading based on a primary key search of the root table has faster performance than primary key searches on a child table or foreign key searches on a child table.
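For example, with a hypothetical dynamic cache group whose root table is customer and whose child table is orders (these table names are illustrative and are not from this guide), a dynamic load driven by an equality match on the root table's primary key is the fastest case:
-- Fastest: dynamic load keyed on the primary key of the root table (customer)
SELECT * FROM customer WHERE cust_num = 1001;
-- Slower: dynamic load driven by a primary key search on the child table (orders)
SELECT * FROM orders WHERE order_id = 5005;
-- Slower: dynamic load driven by a foreign key search on the child table
SELECT * FROM orders WHERE cust_num = 1001;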
By default, an AWT cache group uses the PL/SQL execution method to apply changes within TimesTen to the Oracle database. AWT bundles all pending operations into a single PL/SQL collection that is sent to the Oracle database server to be executed. This execution method is appropriate when there are mixed transactions and network latency between TimesTen and the Oracle database server.
Use the CacheAWTMethod
first connection attribute to specify SQL array execution to apply changes within TimesTen to the Oracle database. This method is appropriate when the same type of operation is repeated. For example, SQL array execution is very efficient when a user does an update that affects several rows of the table. Updates are grouped together and sent to the Oracle database server in one batch.
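For example, the following is a minimal connection sketch, assuming that CacheAWTMethod=0 selects the SQL array execution method and that the default value of 1 selects the PL/SQL execution method (verify the values in "CacheAWTMethod" in the Oracle TimesTen In-Memory Database Reference). The DSN and user names shown are illustrative only:
Command> connect "DSN=sampledb_1122;UID=cacheuser;PWD=timesten;OraclePWD=oraclepwd;CacheAWTMethod=0";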
The PL/SQL execution method transparently falls back to SQL array execution mode temporarily when it encounters one of the following:
A statement that is over 32761 bytes in length.
A statement that references a column of type BINARY_FLOAT or BINARY_DOUBLE, or a VARCHAR/VARBINARY column with a length greater than 4000 bytes.
Note:
You can also set this value with the ttDBConfig
built-in procedure with the CacheAwtMethod
parameter. For details, see "ttDBConfig" in the Oracle TimesTen In-Memory Database Reference.
For more information, see "CacheAWTMethod" in the Oracle TimesTen In-Memory Database Reference.
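For example, the following is a minimal sketch of changing the method at runtime with this built-in procedure, assuming the value '0' selects SQL array execution in the same way as the CacheAWTMethod connection attribute:
Command> call ttDBConfig('CacheAwtMethod', '0');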
At certain times, you may execute large transactions, such as end of the month, end of the quarter, or end of the year transactions. You may also have situations where you modify or add a large amount of data in the Oracle database over a short period of time. For incremental autorefresh read-only cache groups, TimesTen could potentially run out of permanent space when an autorefresh operation applies either of these cases. For these situations, you can configure an autorefresh transaction limit, where the large amount of data is broken up, applied, and committed over several smaller transactions.
The ttCacheAutorefreshXactLimit
built-in procedure enables you to direct autorefresh to commit after executing a specific number of operations. This option applies to all incremental autorefresh read-only cache groups that are configured with the same autorefresh interval.
Since the single transaction is broken up into several smaller transactions, transactional consistency cannot be maintained while the autorefresh is in progress. Once the autorefresh cycle completes, the data is transactionally consistent. To protect instance consistency, we recommend that you set the autorefresh transaction limit only on cache groups that contain a single table, since instance consistency between parent and child tables is not guaranteed: when the autorefresh transaction limit is turned on, TimesTen does not enforce the foreign key relationship that protects instance consistency. Once you turn off the autorefresh transaction limit for incremental autorefresh read-only cache groups, both instance and transactional consistency are maintained again.
Note:
If you are using an active standby pair, you must execute the ttCacheAutorefreshXactLimit
built-in procedure with the same values on both the active and standby masters.
Note:
For more information, such as the syntax and the returned result set, see "ttCacheAutorefreshXactLimit" in the Oracle TimesTen In-Memory Database Reference.
For month end processing, there can be a large number of updates in a single transaction for the Oracle tables that are cached in autorefresh cache groups. To ensure that the large transaction does not fill up permanent memory, you can enable autorefresh to commit after every 256 (or any other user-specified number of) operations with the ttCacheAutorefreshXactLimit
built-in procedure.
Turn on an autorefresh transaction limit for incremental autorefresh read-only cache groups before a large transaction with the ttCacheAutorefreshXactLimit
built-in procedure where the value
is set to ON
or to a specific number of operations. Then, when autorefresh finishes updating the cached tables in TimesTen, turn off the autorefresh transaction limit for incremental autorefresh read-only cache groups with the ttCacheAutorefreshXactLimit
built-in procedure.
The following example sets up the transaction limit to commit after every 256 operations for all incremental autorefresh read-only cache groups that are defined with an interval value of 10 seconds.
call ttCacheAutorefreshXactLimit('10000', 'ON');
After the month end process has completed and the incremental autorefresh read-only cache groups are refreshed, disable the transaction limit for incremental autorefresh read-only cache groups that are defined with the interval value of 10 seconds.
call ttCacheAutorefreshXactLimit('10000', 'OFF');
To enable the transaction limit for incremental autorefresh read-only cache groups to commit after every 1024 operations, provide 1024 as the value as follows:
call ttCacheAutorefreshXactLimit('10000', '1024');
The following example uses the employees and departments tables, where the department id of the employees table is a foreign key that references the department id of the departments table.
The following example creates two incremental autorefresh read-only cache groups, where each table is in its own cache group. The autorefresh transaction limit is enabled with ttCacheAutorefreshXactLimit
before a large transaction and is disabled after it completes.
Before you initiate the large transaction, invoke ttCacheAutorefreshXactLimit
to set the interval value and the number of operations after which to automatically commit. The following sets the number of operations to three (which is intentionally low to show a brief example) for all incremental autorefresh read-only cache groups with a two second interval.
CALL ttCacheAutorefreshXactLimit('2000', '3');
< 2000, 3 >
1 row found.
Create the incremental autorefresh read-only cache groups with an interval of two seconds. This example creates two static (non-dynamic) read-only cache groups, where each contains a single table.
CREATE READONLY CACHE GROUP cgDepts
AUTOREFRESH MODE INCREMENTAL INTERVAL 2 SECONDS
FROM departments
  ( department_id   NUMBER(4) PRIMARY KEY
  , department_name VARCHAR2(30) NOT NULL
  , manager_id      NUMBER(6)
  , location_id     NUMBER(4)
  );

CREATE READONLY CACHE GROUP cgEmpls
AUTOREFRESH MODE INCREMENTAL INTERVAL 2 SECONDS
FROM employees
  ( employee_id    NUMBER(6) PRIMARY KEY
  , first_name     VARCHAR2(20)
  , last_name      VARCHAR2(25) NOT NULL
  , email          VARCHAR2(25) NOT NULL UNIQUE
  , phone_number   VARCHAR2(20)
  , hire_date      DATE NOT NULL
  , job_id         VARCHAR2(10) NOT NULL
  , salary         NUMBER(8,2)
  , commission_pct NUMBER(2,2)
  , manager_id     NUMBER(6)
  , department_id  NUMBER(4)
  );
Perform a manual LOAD CACHE GROUP
for both autorefresh cache groups.
LOAD CACHE GROUP cgDepts COMMIT EVERY 256 ROWS;
27 cache instances affected.

LOAD CACHE GROUP cgEmpls COMMIT EVERY 256 ROWS;
107 cache instances affected.
You can have inconsistency within a table while an autorefresh is in progress, as shown with the employees table.
On TimesTen, select the minimum and maximum salary of all employees.
SELECT MIN(salary), MAX(salary) FROM employees;
< 2100, 24000 >
1 row found.
On the Oracle database, add 100,000 to everyone's salary.
UPDATE employees SET salary = salary + 100000;
107 rows updated.
On TimesTen, when you perform the SELECT
again (while the autorefresh transactions are committed after every 3 records), it shows that while the maximum salary has been updated, the minimum salary still shows the old value.
SELECT MIN(salary), MAX(salary) FROM employees;
< 2100, 124000 >
1 row found.
However, once the autorefresh completes, transactional consistency is maintained. For this example, once the autorefresh process completes, all salaries have increased by 100,000.
SELECT MIN(salary), MAX(salary) FROM employees;
< 102100, 124000 >
1 row found.
The large transaction is complete, so disable the transaction limit for autorefresh cache groups with a 2 second interval.
call ttCacheAutorefreshXactLimit('2000', 'OFF');
You can have transactional inconsistency between cache groups if you perform a SQL statement while the autorefresh process is in progress. The following SELECT
statement example executes against the employees and departments tables, which are cached in the cgEmpls and cgDepts
autorefresh cache groups. With this example, since the foreign key is not enforced on TimesTen and the autorefresh process applies several transactions, the employees table updates may be applied before the departments table updates.
In addition, not all of the updates for the tables in both cache groups are applied until the autorefresh cycle has completed. In the following example, the SELECT
statement is executed before the autorefresh process is complete. Thus, the results do not show all of the expected data: the department name and several employees (some of the lawyers in legal department 1000) are missing.
SELECT e.department_id, d.DEPARTMENT_NAME, e.FIRST_NAME, e.LAST_NAME
FROM employees e, departments d
WHERE e.DEPARTMENT_ID = d.DEPARTMENT_ID (+)
AND e.department_id >= 1000
ORDER BY 1,2,3,4;
< 1000, <NULL>, Alan, Dershowitz >
< 1000, <NULL>, F. Lee, Bailey >
< 1000, <NULL>, Johnnie, Cochran >
3 rows found.
However, after the autorefresh process completes, transactional consistency is maintained. The following shows the same SELECT
statement executed after the autorefresh is complete. All of the expected data, including the department information and all of the new lawyers, is present.
SELECT e.department_id, d.DEPARTMENT_NAME, e.FIRST_NAME, e.LAST_NAME
FROM employees e, departments d
WHERE e.DEPARTMENT_ID = d.DEPARTMENT_ID (+)
AND e.department_id >= 1000
ORDER BY 1,2,3,4;
< 1000, Legal, Alan, Dershowitz >
< 1000, Legal, Barry, Scheck >
< 1000, Legal, F. Lee, Bailey >
< 1000, Legal, Johnnie, Cochran >
< 1000, Legal, Robert, Kardashian >
< 1000, Legal, Robert, Shapiro >
6 rows found.
For autorefresh cache groups that have more than one table, you can also experience transactional inconsistency if you execute SQL statements while the autorefresh process is in progress.
Initiate the transaction limit for incremental autorefresh cache groups with a 2 second interval with the ttCacheAutorefreshXactLimit
built-in procedure and create a single autorefresh cache group with two tables: the employees and departments tables.
CALL ttCacheAutorefreshXactLimit('2000', '3');
< 2000, 3 >
1 row found.

CREATE READONLY CACHE GROUP cgDeptEmpls
AUTOREFRESH MODE INCREMENTAL INTERVAL 2 SECONDS
FROM departments
  ( department_id   NUMBER(4) PRIMARY KEY
  , department_name VARCHAR2(30) NOT NULL
  , manager_id      NUMBER(6)
  , location_id     NUMBER(4)
  )
, employees
  ( employee_id    NUMBER(6) PRIMARY KEY
  , first_name     VARCHAR2(20)
  , last_name      VARCHAR2(25) NOT NULL
  , email          VARCHAR2(25) NOT NULL UNIQUE
  , phone_number   VARCHAR2(20)
  , hire_date      DATE NOT NULL
  , job_id         VARCHAR2(10) NOT NULL
  , salary         NUMBER(8,2)
  , commission_pct NUMBER(2,2)
  , manager_id     NUMBER(6)
  , department_id  NUMBER(4)
  , foreign key(department_id) references departments(department_id)
  );
Manually load the cache group.
LOAD CACHE GROUP cgDeptEmpls COMMIT EVERY 256 ROWS;
27 cache instances affected.
Perform a SELECT
statement on TimesTen that shows the number of employees in each department.
SELECT e.department_id, d.department_name, count(*)
FROM employees e, departments d
WHERE e.department_id = d.department_id (+)
GROUP BY e.department_id, d.department_name
ORDER BY 1 desc;
< 110, Accounting, 2 >
< 100, Finance, 6 >
< 90, Executive, 3 >
< 80, Sales, 34 >
< 70, Public Relations, 1 >
< 60, IT, 5 >
< 50, Shipping, 45 >
< 40, Human Resources, 1 >
< 30, Purchasing, 6 >
< 20, Marketing, 2 >
< 10, Administration, 1 >
11 rows found.
On Oracle, insert a new legal department, numbered 1000, with 6 new lawyers into the departments and employees tables.
When performing a SELECT
statement on TimesTen during the autorefresh process, only the data for two of the lawyers in department 1000 has been uploaded into TimesTen.
SELECT e.department_id, d.department_name, count(*)
FROM employees e, departments d
WHERE e.department_id = d.department_id (+)
GROUP BY e.department_id, d.department_name
ORDER BY 1 desc;
< 1000, Legal, 2 >
< 110, Accounting, 2 >
< 100, Finance, 6 >
< 90, Executive, 3 >
< 80, Sales, 34 >
< 70, Public Relations, 1 >
< 60, IT, 5 >
< 50, Shipping, 45 >
< 40, Human Resources, 1 >
< 30, Purchasing, 6 >
< 20, Marketing, 2 >
< 10, Administration, 1 >
12 rows found.
However, after the autorefresh process completes, all 6 employees (lawyers) in the legal department have been uploaded to TimesTen. Now, it is transactionally consistent.
SELECT e.department_id, d.department_name, COUNT(*)
FROM employees e, departments d
WHERE e.department_id = d.department_id (+)
GROUP BY e.department_id, d.department_name
ORDER BY 1 desc;
< 1000, Legal, 6 >
< 110, Accounting, 2 >
< 100, Finance, 6 >
< 90, Executive, 3 >
< 80, Sales, 34 >
< 70, Public Relations, 1 >
< 60, IT, 5 >
< 50, Shipping, 45 >
< 40, Human Resources, 1 >
< 30, Purchasing, 6 >
< 20, Marketing, 2 >
< 10, Administration, 1 >
12 rows found.
The large transaction is complete, so disable the transaction limit for autorefresh cache groups with a 2 second interval.
call ttCacheAutorefreshXactLimit('2000', 'OFF');
To see how an autorefresh transaction limit for a particular autorefresh interval is performing, you can retrieve statistics for the last 10 incremental autorefresh transactions for this autorefresh interval with the ttCacheAutorefIntervalStatsGet
built-in procedure. See "Retrieving statistics on autorefresh transactions" for more information.
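For example, mirroring the call shown in "Retrieving statistics on autorefresh transactions" later in this chapter, the following sketch retrieves these statistics for static incremental autorefresh read-only cache groups with a 10 second (10000 millisecond) interval; the interval value is illustrative:
Command> call ttCacheAutorefIntervalStatsGet(10000, 1);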
To facilitate incremental autorefresh for read-only cache groups, TimesTen executes a join query on both the Oracle database base table and its corresponding change log table to retrieve the incremental changes. However, if both tables are very large, the join query can be slow. In addition, if the Oracle database base table is continuously updated while the join query is executing, you may receive the ORA-01555
“Snapshot too old” error from a long-running autorefresh query.
To avoid this situation, you can configure incremental autorefresh with a select limit, which joins the Oracle database base table with a limited number of rows from the autorefresh change log table. You can configure a select limit with the ttCacheAutorefreshSelectLimit
built-in procedure.
Autorefresh continues to apply changes to the cached table incrementally until all the rows in the autorefresh change log table have been applied. When there are no rows left to apply, the autorefresh thread sleeps for the rest of the interval period.
Note:
For details on the syntax, parameters, result set, and restrictions, see "ttCacheAutorefreshSelectLimit" in the Oracle TimesTen In-Memory Database Reference.
For example, before a large transaction, you can execute the ttCacheAutorefreshSelectLimit
built-in procedure to set a select limit to 1000 rows for incremental autorefresh cache groups with an interval value of 10 seconds. The following example sets the value
to ON.
Command> call ttCacheAutorefreshSelectLimit('10000', 'ON');
< 10000, ON >
1 row found.
The following example sets a select limit of 2000 rows for incremental autorefresh cache groups with an interval value of 7 seconds.
Command> call ttCacheAutorefreshSelectLimit('7000', '2000');
< 7000, 2000 >
1 row found.
You can disable any select limit for incremental autorefresh cache groups with an interval value of 10 seconds by setting the value
to OFF.
Command> call ttCacheAutorefreshSelectLimit('10000', 'OFF');
< 10000, OFF >
1 row found.
To see how a select limit for a particular autorefresh interval is performing, you can retrieve statistics for incremental autorefresh transactions for this autorefresh interval. See "Retrieving statistics on autorefresh transactions" for more information.
To determine the interval for a cache group, use ttIsql
and run the cachegroups
command:
> cachegroups cgowner.cgname;
This returns all attributes for the cgowner.cgname
cache group including the interval.
To determine which intervals have a select limit, you can run the following query on the Oracle database where <cacheAdminUser>
is the cache administrator, <hostName>
is the host name of the machine where the TimesTen database is located, <databaseFileName>
is the database path taken from the DataStore
attribute, and substitute the version number (such as 06) for the xx.
SELECT * FROM <cacheAdminUser>.tt_xx_arinterval_params
WHERE param='AutorefreshSelectEveryN'
AND host='<hostName>'
AND database like '%<databaseFileName>%'
ORDER BY arinterval;
For example, if the cache administrator user name is pat
, the host name is myhost
, the database file name is myTtDb
, and 06 (the TimesTen minor release number) is substituted for xx, then:
SELECT * FROM pat.tt_06_arinterval_params
WHERE param='AutorefreshSelectEveryN'
AND host='myhost'
AND database like '%myTtDb%'
ORDER BY arinterval;
The interval is stored in milliseconds.
To see how a select limit for a particular autorefresh interval is performing, you can retrieve statistics for incremental autorefresh transactions for this autorefresh interval with the ttCacheAutorefIntervalStatsGet
built-in procedure. See "Retrieving statistics on autorefresh transactions" for more information.
As described "Transaction reclaim operations" in the Oracle TimesTen In-Memory Database Operations Guide, TimesTen resource cleanup occurs during the reclaim phase of a transaction commit. To improve performance, a number of transaction log records are cached in memory to reduce the need to access the transaction log on disk in the the commit buffer. However, TimesTen must access the transaction log on disk if the transaction is larger than the reclaim buffer.
When you are using autorefresh for your cache groups, the cache agent has its own reclaim buffer to manage the transactions that are committed within autorefresh operations. If the cache agent reclaim buffer is too small, the commit operations during autorefresh can take longer than expected as it must access the transaction log on disk. To avoid any performance issues, you can configure a larger reclaim buffer for the cache agent so that the cache agent can handle larger transactions in memory at reclaim time.
When using an active standby pair replication scheme to replicate autorefresh operations, the replication agent applies the same autorefresh operations as part of the replication. Thus, the replication agents on both the active and standby nodes have their own reclaim buffers, which should be configured to be the same size as or greater than the cache agent reclaim buffer.
The ttDBConfig
built-in procedure provides the following parameters for setting the maximum size of the reclaim buffers for both the cache agent and the replication agent; a call sketch follows the list. (The memory for the reclaim buffers is allocated out of temporary memory.)
CacheAgentCommitBufSize
sets the maximum size for the reclaim buffer for the cache agent.
RepAgentCommitBufSize
sets the maximum size for the reclaim buffer for the replication agent. You should configure the maximum size for the reclaim buffer on both the active and standby nodes. It is recommended, but not required, that you set the reclaim buffers to the same size on both nodes.
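The following is a minimal call sketch for these parameters; the value 100 is purely illustrative, so check "ttDBConfig" in the Oracle TimesTen In-Memory Database Reference for the unit and the valid range of each parameter:
Command> call ttDBConfig('CacheAgentCommitBufSize', '100');
Command> call ttDBConfig('RepAgentCommitBufSize', '100');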
To determine whether you should increase the size of the cache agent reclaim buffer, evaluate the CommitBufMaxReached
and CommitBufNumOverflows
statistics provided by the ttCacheAutorefIntervalStatsGet
built-in procedure. For more details, see "Retrieving statistics on autorefresh transactions."
Execute the ttCacheAutorefIntervalStatsGet
built-in procedure for statistical information about the last 10 autorefresh cycles for a particular autorefresh interval defined for an incremental autorefresh read-only cache group.
Note:
For more information on the syntax and the returned result set for this built-in procedure, see "ttCacheAutorefIntervalStatsGet" in the Oracle TimesTen In-Memory Database Reference.
This built-in procedure is useful if you have set a transaction limit or a select limit for incremental autorefresh read-only cache groups. See "Improving execution of large transactions when using incremental autorefresh for read-only cache groups" and "Improving performance when using incremental autorefresh for read-only cache groups" for details.
The following example shows how to execute the ttCacheAutorefIntervalStatsGet
built-in procedure to retrieve statistics for incremental autorefresh read-only cache groups that have been defined as static and have an interval of 2 seconds:
Command> call ttCacheAutorefIntervalStatsGet(2000, 1);
< 2000, 1, 21, 2013-04-30 06:05:38.000000, 100, 3761, 3761, 822, 1048576, 1280, 0, 58825, 63825, 13590, 0, 0, 0, 0, 0 >
< 2000, 1, 20, 2013-04-30 06:05:37.000000, 100, 85, 85, 18, 1048576, 1280, 0, 55064, 60064, 12768, 0, 0, 0, 0, 0 >
< 2000, 1, 19, 2013-04-30 06:05:32.000000, 100, 3043, 3043, 666, 1048576, 1280, 0, 54979, 59979, 12750, 0, 0, 0, 0, 0 >
< 2000, 1, 18, 2013-04-30 06:05:30.000000, 100, 344, 344, 74, 1048576, 1280, 0, 51936, 56936, 12084, 0, 0, 0, 0, 0 >
< 2000, 1, 17, 2013-04-30 06:05:28.000000, 100, 1826, 1826, 382, 1048576, 1280, 0, 51592, 56592, 12010, 0, 0, 0, 0, 0 >
< 2000, 1, 16, 2013-04-30 06:05:26.000000, 100, 55, 55, 12, 1048576, 1280, 0, 49766, 54766, 11628, 0, 0, 0, 0, 0 >
< 2000, 1, 15, 2013-04-30 06:05:22.000000, 100, 2901, 2901, 634, 1048576, 1280, 0, 49711, 54711, 11616, 0, 0, 0, 0, 0 >
< 2000, 1, 14, 2013-04-30 06:05:21.000000, 100, 55, 55, 12, 1048576, 1280, 0, 46810, 51810, 10982, 0, 0, 0, 0, 0 >
< 2000, 1, 13, 2013-04-30 06:05:10.000000, 100, 5844, 5844, 1263, 1048576, 1280, 0, 46755, 51755, 10970, 0, 0, 0, 0, 0 >
< 2000, 1, 12, 2013-04-30 06:05:08.000000, 100, 607, 607, 132, 1048576, 1280, 0, 40911, 45911, 9707, 0, 0, 0, 0, 0 >
10 rows found.
For each cache administration user, TimesTen creates a change log table and trigger (as part of what is created to manage caching) in the Oracle database for each cache table in the cache group. A trigger is fired for each committed insert, update, or delete operation on the cached Oracle Database table; the action is logged in the change log table.
If you cache the same Oracle database table in a cache group on two different TimesTen databases, we recommend that you use the same cache administration user name on both TimesTen databases as the owner of the cache table on each TimesTen database. When you use the same cache administration user, only one trigger and change log table are created to manage the changes to the base table. Thus, it is efficient and does not slow down the application.
If you create separate cache administration users on each TimesTen database to own the cache group that caches the same Oracle table, then separate triggers and change log tables exist on the Oracle database for the same table: one for each cache administration user. For example, if you have two separate TimesTen databases, each with their own cache administration user, two triggers fire for each DML operation on the base table, each of which are stored in a separate change log table. Firing two triggers and managing the separate change log tables can slow down the application.
The only reason to create separate cache administration users is if one of the TimesTen databases that caches the same table has a slow autorefresh rate or a slow connection to the Oracle database. In this case, having a single cache administration user on both TimesTen databases slows down the application on the faster connection, as it waits for the updates to be propagated to the slower database.