Oracle® Retail Data Model Operations Guide 10g Release 2 (10.2) Part Number E14480-02
This chapter describes how Extract, Transform, and Load (ETL) operations populate Oracle Retail Data Model with data.
This chapter discusses the following topics:
Once you have performed ETL operations to implement an Oracle Retail Data Model data warehouse, you need to update it with new data from your OLTP system. This process is discussed in Chapter 5, "Maintaining an Oracle Retail Data Model Warehouse".
Note:
The instructions in this chapter assume that, after performing the fit-gap analysis described in "Performing Fit-Gap Analysis", you have not identified or made any changes to the Oracle Retail Data Model logical or physical model. If you have made changes, you must modify the ETL accordingly.

In the Oracle Retail Data Model relational model, reference and lookup tables store master, reference, and dimensional data, while base, derived, and aggregate tables store transaction and fact data at different granularities. Base tables store transaction data at the lowest level of granularity, while derived and aggregate tables store consolidated and summary transaction data.
As with any data warehouse, you use Extract, Transform, and Load (ETL) operations to populate an Oracle Retail Data Model data warehouse. You perform ETL operations as three separate steps using three different types of ETL:
Source-ETL processes that extract data from the source On-Line Transaction Processing (OLTP) system, transform that data, and load the reference, lookup, and base tables of the Oracle Retail Data Model warehouse. Source-ETL is not provided with Oracle Retail Data Model; you must write the source-ETL processes yourself. For information about creating source-ETL, see "Populating Reference, Lookup, and Base Relational Tables".
Intra-ETL processes that populate the remaining Oracle Retail Data Model warehouse relational data structures. Intra-ETL does not access the OLTP data at all. All of the data that intra-ETL extracts and transforms is located within the Oracle Retail Data Model warehouse. Intra-ETL is provided with the Oracle Retail Data Model and is executed in the following order:
Intra-ETL that populates the derived and aggregate tables and materialized views with data from the base, reference, and lookup tables. For information about using this intra-ETL, see "Populating Derived and Aggregate Relational Tables and Views".
Intra-ETL that populates the tables used for the data mining models. For more information on using this intra-ETL, see "Implementing Oracle Retail Data Model Data Mining Models"
SQL scripts that populate the OLAP cubes provided with Oracle Retail Data Model. These scripts define the OLAP cubes and populate them with data extracted from the Oracle Retail Data Model relational tables and views. For more information on populating OLAP cubes in an Oracle Retail Data Model warehouse, see "Populating OLAP Cubes".
You populate the reference, lookup, and base tables with data from the source On-Line Transaction Processing (OLTP) applications using source-ETL.
Source-ETL is not provided with Oracle Retail Data Model. You must design and write the source-ETL processes yourself. When writing these ETL processes, populate the tables in the following order:
See:
Appendix A, "Operations Scripts" for information about scripts provided with Oracle Retail Data Model that can help you implement your physical data model. For example, "Lookup Value Population Scripts" provides instructions for using two SQL scripts provided with Oracle Retail Data Model to seed values into physical lookup tables.

One component of Oracle Retail Data Model is a database package named PKG_INTRA_ETL_PROCESS, which is a complete Intra-ETL process composed of the Intra-ETL operations that populate the derived and aggregate tables and relational materialized views with data from the base, reference, and lookup tables. This package respects the dependencies among the individual programs and executes them in the proper order.
See also:
The Intra-ETL scripts are discussed in detail in Oracle Retail Data Model Reference.

There are two categories of Intra-ETL scripts:
Derived Population - These are a set of database packages that populate the derived tables based on the content of the base, reference, and lookup tables. Derived tables are implemented using Oracle tables.
Aggregate Population - These are a set of database packages that populate the aggregate tables that are mainly implemented as materialized views based on the content of the previously populated Oracle tables.
Note:
The Intra-ETL scripts provided with Oracle Retail Data Model assume that there is no data in the derived and aggregate tables and views. Typically, you perform this type of load only when you first create your data warehouse. Later, when you need to add data to the tables and refresh your views, you perform an incremental load as described in "Maintaining Relational Tables and Views".

Using the Intra-ETL involves the following tasks:
There are two ways that you can execute the Intra-ETL packages provided with Oracle Retail Data Model. The method you use depends on whether you answered "yes" or "no" to the question "Indicate if this installation will be used to store transaction level history" when you installed Oracle Retail Data Model:
If you selected "yes" during installation, then Level0 is MV and you can execute the Intra-ETL using Oracle Warehouse Builder as discussed in "Executing the Intra-ETL in Oracle Warehouse Builder".
If you selected "no" during installation, then Level0 is Table and you must explicitly execute the Intra-ETL package as described in "Explicitly Executing the Intra-ETL Package".
You can install Oracle Retail Data Model Intra-ETL as a project in Oracle Warehouse Builder (OWB). Once installed, you can execute the intra-ETL from OWB.
See:
For information about Oracle Warehouse Builder, see Oracle Warehouse Builder User's Guide.

To use the Oracle Retail Data Model Intra-ETL in Oracle Warehouse Builder (OWB), follow these steps:
This installation requires Oracle Warehouse Builder 10.2.0.1.0.
Follow these steps to import the ORDM_INTRA_ETL project:
Log in as a repository user to the OWB Design Center where you want to import ORDM_INTRA_ETL. In this example, the repository user is RBIA_ETL.
Click the Design menu, and select Import.
Click the submenu Warehouse Builder Metadata. The Metadata Import window opens.
Enter the name and location of the exported metadata loader file (.mdl), Intra_ETL_OWB_ProcessFlow.mdl, in the folder ORACLE_HOME/ORDM/ORDM/PDM/Relational/Intra_ETL/OWB.
Enter a log file name and location. The log file enables monitoring of import operations.
Don't change any other options.
Click Import.
After the import finishes, ORDM_INTRA_ETL appears in the OWB Design Center in the Project Explorer.
Follow these steps to configure the imported ORDM_INTRA_ETL project:
Create a metadata location in OWB for which the corresponding schema is the database schema where all Oracle Retail Data Model-related objects are available and installed. For example, create a metadata location named ETL_DEMO_LOC under the ETL_DEMO schema where all Oracle Retail Data Model-related objects are available.
Right-click the data module ORDM_DERIVE_AGGREGATE, and select Open Editor.
Change the Metadata Location and the Data Location of the data module of the imported project to the location defined in the first step. In this example, change the location to ETL_DEMO_LOC.
Right-click the data module ORDM_DERIVE_AGGREGATE and select Configure.
In the Identification property, change Location and Streams Administrator location to the location created in the first step. In this example, change both items to ETL_DEMO_LOC.
Create an Oracle Workflow location in OWB in a workflow schema. For example, create the Oracle Workflow location OWF_ET_DEMO_LOC in the OWF_MGR schema.
Right-click the process flow module (in this example, ORDM_INTRA_ETL), and select Open Editor.
In the editor, change the Data Location of the process flow module to the new location created in step 6. In this example, change the Data Location for ORDM_INTRA_ETL to OWF_ET_DEMO_LOC.
Right-click the process flow module, and select Configure. In this example, right-click ORDM_INTRA_ETL.
In the Configuration Properties,
In Execution, change Evaluation Location to the new location created in step 6.
In Identification, change Location to the new location created in step 6.
In this example, change both values to OWF_ET_DEMO_LOC.
Log in to the Design Center as the OWB owner user. In Global Explorer -> Security -> Users, right-click Users and click New. Select the two users corresponding to the new locations. Click Next and then Finish.
Save the project.
You are now almost ready to execute the project. Before you can execute the project, ensure that the repository user (in this example, RBIA_ETL) has the EXECUTE privilege for the following packages:
PKG_AGGREGATE_ALL
PKG_DWD_CTLG_RQST_BY_DAY
PKG_DWD_POS_RTL
PKG_DWD_POS_CNTRL
PKG_DWD_POS_STORE_FINCL
PKG_DWD_CUST_EMP_RLTNSHP_DAY
PKG_DWD_INV_POSN_BY_ITEM_DAY
PKG_DWD_CERTIFICATE_ACTVTY_TRX
PKG_DWD_CUST_ORDR_LI_STATE
PKG_DWD_RTV_ITEM_DAY
PKG_DWD_CUST_ORDR_ITEM_DAY
PKG_DWD_CUST_SKU_SL_RETRN_DAY
PKG_DWD_INV_ADJ_BY_ITEM_DAY
PKG_DWD_INV_UNAVL_BY_ITEM_DAY
PKG_DWD_SPACE_UTLZTN_ITEM_DAY
PKG_DWD_POS_TNDR_FLOW
PKG_DWD_RTL_SL_RETRN_ITEM_DAY
PKG_INTRA_ETL_UTIL
These packages are listed on the Transformation node of the data module. In this example, they are listed on the Transformation node of ORDM_DERIVE_AGGREGATE.
Follow these steps to deploy and execute the main process flow:
Go to the Control Center Manager.
Select the Oracle Workflow location that was created in Configure the ORDM_INTRA_ETL Project. In this example, the location is OWF_ET_DEMO_LOC.
Select the main process flow RBIA_INTRA_ETL_FLW. Right-click it and select Set Action. If this is the first deployment, set the action to Create; for subsequent deployments, set it to Replace. Deploy the process flow.
After the deployment finishes successfully, RBIA_INTRA_ETL_FLW is ready to execute.
Keep the following in mind:

Ensure that you specify the date ranges in the DWC_ETL_PARAMETER table for PROCESS_NAME 'RBIA-INTRA-ETL' before you trigger the ETL process.

Ensure that the time partitions for the load period have been created:

To generate partitions for L0 MV, use ORACLE_HOME/ORDM/PDM/Relational/Intra_ETL/L0_MV/generate_add_partition.sql

To generate partitions for L0 Tables, use ORACLE_HOME/ORDM/PDM/Relational/Intra_ETL/L0_Table/generate_add_partition.sql
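As a sketch, you might run the partition-generation script from a SQL*Plus session connected to the Oracle Retail Data Model schema (the user name and password are placeholders; in SQL*Plus, the ? character in a script path expands to ORACLE_HOME):

```
-- Illustrative SQL*Plus session; credentials are placeholders.
$ sqlplus etl_demo/etl_demo

-- Generate the time partitions for the L0 materialized views:
SQL> @?/ORDM/PDM/Relational/Intra_ETL/L0_MV/generate_add_partition.sql
```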
Oracle Retail Data Model provides a PKG_INTRA_ETL_PROCESS.RUN procedure that starts the Oracle Retail Data Model Intra-ETL process. This procedure can be invoked manually, by another process such as source-ETL, or on a predefined schedule such as an Oracle job schedule.

PKG_INTRA_ETL_PROCESS.RUN does not accept parameters. The procedure calls the other programs in the correct order to load the data for the current day (according to the Oracle system date). The results of each table load are tracked in the DWC_ control tables.
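A minimal sketch of both invocation styles follows; the DBMS_SCHEDULER job name and repeat interval are illustrative, not part of Oracle Retail Data Model:

```sql
-- Manual invocation from SQL*Plus:
SQL> EXEC PKG_INTRA_ETL_PROCESS.RUN

-- Or schedule a nightly run with DBMS_SCHEDULER
-- (job name and repeat interval are illustrative):
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'ORDM_INTRA_ETL_JOB',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'PKG_INTRA_ETL_PROCESS.RUN',
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',
    enabled         => TRUE);
END;
/
```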
Keep the following in mind:

Ensure that you specify the date ranges in the DWC_ETL_PARAMETER table for PROCESS_NAME 'RBIA-INTRA-ETL' before you trigger the ETL process.

Ensure that the time partitions for the load period have been created:

To generate partitions for L0 MV, use ORACLE_HOME/ORDM/PDM/Relational/Intra_ETL/L0_MV/generate_add_partition.sql

To generate partitions for L0 Tables, use ORACLE_HOME/ORDM/PDM/Relational/Intra_ETL/L0_Table/generate_add_partition.sql
Two control tables, DWC_INTRA_ETL_PROCESS and DWC_INTRA_ETL_ACTIVITY, monitor the execution of the Intra-ETL process.

Table 3-1 contains column information for DWC_INTRA_ETL_PROCESS. Table 3-2 contains column information for DWC_INTRA_ETL_ACTIVITY.
Table 3-1 DWC_INTRA_ETL_PROCESS Columns

Column Name | Data Type and Size | Not NULL? | Remarks |
---|---|---|---|
PROCESS_KEY | NUMBER(30) | Yes | Primary Key. System Generated Unique Identifier |
PROCESS_START_TIME | DATE | Yes | ETL Process Start Date and Time |
PROCESS_END_TIME | DATE | | ETL Process End Date and Time |
PROCESS_STATUS | VARCHAR2(30) | Yes | Current Status of the process |
FROM_DATE_ETL | DATE | | Start Date (ETL) - From Date of the ETL date range |
TO_DATE_ETL | DATE | | End Date (ETL) - To Date of the ETL date range |
LOAD_DT | DATE | | Record Load Date - Audit Field |
LAST_UPDT_DT | DATE | | Last Update Date and Time - Audit Field |
LAST_UPDT_BY | VARCHAR2(30) | | Last Update By - Audit Field |
Table 3-2 DWC_INTRA_ETL_ACTIVITY Columns

Column Name | Data Type and Size | Not NULL? | Remarks |
---|---|---|---|
ACTIVITY_KEY | NUMBER(30) | Yes | Primary Key. System Generated Unique Identifier |
PROCESS_KEY | NUMBER(30) | Yes | Process Key. FK to DWC_INTRA_ETL_PROCESS table |
ACTIVITY_NAME | VARCHAR2(50) | Yes | Activity Name or Intra ETL Program Name |
ACTIVITY_DESC | VARCHAR2(500) | | Activity Description |
ACTIVITY_START_TIME | DATE | Yes | Intra-ETL Program Execution Start Time |
ACTIVITY_END_TIME | DATE | | Intra-ETL Program Execution End Time |
ACTIVITY_STATUS | VARCHAR2(30) | Yes | Current Status of the individual program |
ERROR_DTL | VARCHAR2(2000) | | Error details, if any |
LOAD_DT | DATE | | Record Load Date - Audit Field |
LAST_UPDT_DT | DATE | | Last Update Date and Time - Audit Field |
LAST_UPDT_BY | VARCHAR2(30) | | Last Update By - Audit Field |
At the top level, the complete Intra-ETL process is divided into two groups: (1) Derived Population and (2) Aggregate Population. The groups are executed in that order (that is, Aggregate Population is invoked only when all of the individual programs that make up Derived Population complete with a status of COMPLETED-SUCCESS).
Each normal run (as opposed to an error-recovery run) of a separate Intra-ETL execution performs the following steps:
Inserts a record into the DWC_INTRA_ETL_PROCESS table with a unique, monotonically increasing, system-generated process key, SYSDATE as the process start time, RUNNING as the process status, and the input date range in the FROM_DATE_ETL and TO_DATE_ETL fields.
Invokes each of the individual Intra-ETL programs in the appropriate order of dependency. Before invoking each program, the procedure inserts a record into the Intra-ETL activity detail table DWC_INTRA_ETL_ACTIVITY with a system-generated unique activity key, the process key value corresponding to the Intra-ETL process, the individual program name as the activity name, a suitable activity description, SYSDATE as the activity start time, and RUNNING as the activity status.
Updates the corresponding record in the DWC_INTRA_ETL_ACTIVITY table with the activity end time and activity status after the completion of each individual ETL program (either successfully or with errors). For successful completion of an activity, the procedure updates the status to COMPLETED-SUCCESS. If an error occurs, the procedure updates the activity status to COMPLETED-ERROR, and also records the corresponding error detail in the ERROR_DTL column.
If an error occurs, all of the DML operations performed as part of the individual ETL program are rolled back and the corresponding error details are recorded in the activity table record, along with a status indicating that an error occurred. Within a group, an error in one program does not stop the execution of the other, independent programs; only the programs that depend on the program that encountered the error are skipped. However, as explained earlier, because Aggregate Population starts only after the successful completion of the entire Derived Population, an error in any individual program in the Derived Population group causes the Aggregate Population set of programs and the MV refresh to be skipped.
Updates the record corresponding to the process in the DWC_INTRA_ETL_PROCESS table with the process end time and status after the completion of all individual Intra-ETL programs. If all of the individual programs succeed, the procedure updates the status to COMPLETED-SUCCESS; otherwise, it updates the status to COMPLETED-ERROR.
You can monitor the execution state of the Intra-ETL, including current process progress and the time taken by individual programs or by the complete process, by viewing the contents of the DWC_INTRA_ETL_PROCESS and DWC_INTRA_ETL_ACTIVITY tables corresponding to the maximum process key. This can be done both during and after the execution of the Intra-ETL procedure.
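For example, a monitoring query for the most recent run might look like the following sketch, using the columns documented in Tables 3-1 and 3-2:

```sql
-- Show the status of each activity in the most recent Intra-ETL run.
SELECT a.activity_name,
       a.activity_status,
       a.activity_start_time,
       a.activity_end_time,
       a.error_dtl
FROM   dwc_intra_etl_activity a
WHERE  a.process_key = (SELECT MAX(process_key)
                        FROM   dwc_intra_etl_process)
ORDER  BY a.activity_start_time;
```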
When an error occurs, the corresponding error details are tracked against the individual programs in the DWC_INTRA_ETL_ACTIVITY table.
In order to restart the Intra-ETL operation, you must:
Check and correct each of these errors (which may be related to the data or to the database, such as database size), taking the appropriate data-correction or database-related action.
Invoke the Intra-ETL process as an error-recovery run using the PKG_INTRA_ETL_PROCESS.RECOVERYRUN procedure instead of the database package used for a normal run.
The PKG_INTRA_ETL_PROCESS.RECOVERYRUN procedure identifies the programs that failed during the previous execution based on the content of the DWC_INTRA_ETL_ACTIVITY table, and executes only those programs as part of the recovery run. If a Derived Population error occurred during the previous run, the recovery run executes the individual derived population programs that produced errors in that run. After their successful completion, the run executes the Aggregate Population programs and the materialized view refresh in the appropriate order. In this way, Intra-ETL error recovery is almost transparent to the data warehouse or ETL administrator. The administrator only needs to correct the causes of the errors and re-invoke the Intra-ETL process; the Intra-ETL process identifies and executes the programs that generated errors.
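As a sketch, after correcting the underlying errors you might start the recovery run from SQL*Plus (assuming that, like PKG_INTRA_ETL_PROCESS.RUN, the procedure takes no parameters):

```sql
-- Re-run only the programs that failed in the previous execution.
SQL> EXEC PKG_INTRA_ETL_PROCESS.RECOVERYRUN
```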
Oracle Retail Data Model provides an optional data mining component. This data mining component extends the core functionality of Oracle Retail Data Model by adding data mining models.

The tables used by the data mining models are defined and populated by the Intra-ETL process provided with Oracle Retail Data Model.
The steps required to implement the data mining models are:
When you install the Data Mining component of Oracle Retail Data Model, the data mining ETL packages are also installed. The Mining ETL packages have names of the form PKG_POP_DM_*
and are listed in Oracle Retail Data Model Reference.
You use these Mining ETL packages to populate data from base or derived tables in the bia_rtl schema to tables named *_SRC in the bia_rtl_mining schema. These *_SRC tables contain the source input data for the data mining models.
To populate the *_SRC tables:
Ensure that the corresponding base and derived tables in the bia_rtl schema are populated.
Note:
For testing purposes, you can create populated tables by importing the *_SRC tables from bia_rtl_mining.dmp.zip, which is installed during the Oracle Retail Data Model install option 3 "Sample Schema and Reports." The zip file is located in ORACLE_HOME/ORDM/Sample/Schema_Dump.

Execute the PKG_POP_DM_*.LOADDATA(p_yearmonth) procedures.
The parameter p_yearmonth is the Business Month that you want to analyze. All Business Months are stored in bia_rtl.DWR_BSNS_MO. The value passed for p_yearmonth must be a value of bia_rtl.DWR_BSNS_MO.MO_KEY.
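To find a valid value to pass, you might first query the business month table; this is a sketch in which only the MO_KEY column is documented here:

```sql
-- List candidate month key values for the period you want to analyze.
SELECT mo_key
FROM   bia_rtl.dwr_bsns_mo
ORDER  BY mo_key;
```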
Note:
The programs that load the data also perform the required data preparation.

Example 3-1 Populating Source Input Data Tables for the Data Mining Models
This example populates data for Business Year 2007 Month 2:
Log in to the database as the BIA_RTL_MINING user:
$ sqlplus bia_rtl_mining/bia_rtl_mining
Execute packages to load data from base and derived table in the bia_rtl
schema to the mining source tables in the bia_rtl_mining
schema:
--Populate ASSOCIATE_BASKET_SRC
SQL> exec pkg_pop_dm_assbas.loaddata(20070219);
--Populate ASSOCIATE_LOSS_SRC
SQL> exec pkg_pop_dm_assloss.loaddata(20070219);
--Populate ASSOCIATE_SALES_SRC
SQL> exec pkg_pop_dm_asssls.loaddata(20070219);
--Populate STR_CTGRY_DTLS_SRC
SQL> exec pkg_pop_dm_custcatgmix.loaddata(20070219);
--Populate CUSTOMER_LOYALTY_SRC
SQL> exec pkg_pop_dm_custlty.loaddata(20070219);
--Populate FS_CATEGORY_MIX_SRC
SQL> exec pkg_pop_dm_fscatgmix.loaddata(20070219);
--Populate ITEM_BASKET_SRC
SQL> exec pkg_pop_dm_itmbas.loaddata(20070219);
--Populate ITEM_POS_LOSS_SRC
SQL> exec pkg_pop_dm_itmposloss.loaddata(20070219);
--Populate POS_FLOW_SRC
SQL> exec pkg_pop_dm_posflow.loaddata(20070219);
--Populate STORE_LOSS_SRC
SQL> exec pkg_pop_dm_strloss.loaddata(20070219);
Check the source tables to see if the data was loaded:
select 'ASSOCIATE_BASKET_SRC',count(*) from ASSOCIATE_BASKET_SRC where month = 'BY 2007 M2'
union
select 'ASSOCIATE_LOSS_SRC  ',count(*) from ASSOCIATE_LOSS_SRC where month = 'BY 2007 M2'
union
select 'ASSOCIATE_SALES_SRC ',count(*) from ASSOCIATE_SALES_SRC where month = 'BY 2007 M2'
union
select 'STR_CTGRY_DTLS_SRC  ',count(*) from STR_CTGRY_DTLS_SRC where month = 'BY 2007 M2'
union
select 'CUSTOMER_LOYALTY_SRC',count(*) from CUSTOMER_LOYALTY_SRC where month = 'BY 2007 M2'
union
select 'FS_CATEGORY_MIX_SRC ',count(*) from FS_CATEGORY_MIX_SRC where month = 'BY 2007 M2'
union
select 'ITEM_BASKET_SRC     ',count(*) from ITEM_BASKET_SRC where month = 'BY 2007 M2'
union
select 'ITEM_POS_LOSS_SRC   ',count(*) from ITEM_POS_LOSS_SRC where month = 'BY 2007 M2'
union
select 'POS_FLOW_SRC        ',count(*) from POS_FLOW_SRC where month = 'BY 2007 M2'
union
select 'STORE_LOSS_SRC      ',count(*) from STORE_LOSS_SRC where month = 'BY 2007 M2'
;
Check the result to ensure that the source tables for which you will create mining models contain data.
Oracle Retail Data Model creates mining models using the following three Oracle Data Mining algorithms: Adaptive Bayes Network (ABN), Decision Tree (DT), and Apriori Association (APASS). These algorithms all build models that have rules.
Note:
In Oracle Data Mining, "Association Rules" is abbreviated as AR.

The mining models are defined and populated using packages named PKG_RBIW_DM_* provided with Oracle Retail Data Model. Each package (analysis) builds models using one or two of the algorithms. The models built depend on the analysis being performed. The output of the model build is a view containing the rules generated by the model.
To create a data mining model:
Log in to the database as the BIA_RTL_MINING user.
Execute the appropriate PKG_RBIW_DM_* package with the appropriate build procedure to create a mining model of the model type(s) you want.
See:
For a complete list of the data mining model creation packages, and the build procedures and build parameters for creating different models of different types, see "Data Mining Component ETL" in Oracle Retail Data Model Reference.

Check that RBIW_DM_MODEL_SIGN and RBIW_DM_RULES were populated with new rules with the following query.
Check that a model-specific view was created with the new rules for the specific model you created.
See:
For a complete list of these views, see "Physical Data Model of the Data Mining Component" in Oracle Retail Data Model Reference.

Example 3-2 Creating a Data Mining Model
Before you build models, ensure that the *_SRC tables are populated. For testing purposes, you can create populated tables by importing the *_SRC tables from bia_rtl_mining.dmp.zip. The zip file bia_rtl_mining.dmp.zip is installed during the Oracle Retail Data Model install option 3 "Sample Schema and Reports" and is located in the directory ORACLE_HOME/ORDM/Sample/Schema_Dump.
These steps show how to run the Mining Model Creation packages. In this example, we create an Associate Basket Analysis model for Business Year 2007 Month 2.
Log in to the database as the BIA_RTL_MINING user:
$ sqlplus bia_rtl_mining/bia_rtl_mining
Execute packages to create the Associate Basket Analysis Model for the two model types ABN and DT:
--For the model type Adaptive Bayes Network (ABN)
SQL> set serveroutput on size 1000000
SQL> spool ASSBAS_ABN.txt
SQL> BEGIN
       PKG_RBIW_DM_ASSBAS.PRC_RUNALL_ABN('ASSOCIATE_BASKET_SRC', 'ASSBAS_MDL_ABN', 2007, 'BY 2007 M2', TRUE);
     END;
     /
SQL> spool off

--For the model type Decision Tree (DT)
SQL> set serveroutput on size 1000000
SQL> spool ASSBAS_DT.txt
SQL> BEGIN
       PKG_RBIW_DM_ASSBAS.PRC_RUNALL_DT('ASSOCIATE_BASKET_SRC', 'ASSBAS_MDL_DT', 2007, 'BY 2007 M2', TRUE);
     END;
     /
SQL> spool off
Check that RBIW_DM_MODEL_SIGN and RBIW_DM_RULES were populated with new rules with the following query:
SQL> select 'SIGN' TYPE, analysis_desc, model_type, count(*) COUNT
     from RBIW_DM_MODEL_SIGN
     where analysis_name = 'ASSOCIATE_BASKET'
     group by analysis_desc, model_type
     union
     select 'RULES' TYPE, analysis_desc, model_type, count(*) COUNT
     from RBIW_DM_RULES
     where analysis_name = 'ASSOCIATE_BASKET'
     group by analysis_desc, model_type
     order by 2,3;
The query should return information like the following:
TYPE | ANALYSIS_DESC | MODEL_TYPE | COUNT |
---|---|---|---|
RULES | Associate Basket Analysis | ABN | 61 |
SIGN | Associate Basket Analysis | ABN | 18 |
RULES | Associate Basket Analysis | DT | 76 |
SIGN | Associate Basket Analysis | DT | 39 |
The actual COUNT values may be different from the ones shown; however, if the model was created successfully, the COUNT should always be greater than 0.
The OLAP component is an option of Oracle Retail Data Model. This OLAP component extends the core functionality of Oracle Retail Data Model by adding OLAP cubes for OLAP analysis and forecasting.
Oracle Retail Data Model OLAP cubes are not populated using a formal Extract, Transform, and Load workflow process. Instead, OLAP cubes are populated through SQL scripts that use the RBIA_OLAP_ETL_AW_LOAD package that is provided with the OLAP component.
OLAP cubes are populated at the following times:
During the initial load of the OLAP cubes.
This load is performed by a SQL script (sometimes called the "OLAP cube initial load script") that is delivered with the Oracle Retail Data Model OLAP component. The actual script that performs the OLAP cube initial load varies depending on Oracle Database release:
For Oracle Database 10g, the script is ordm_olap_install_scr.sql
For Oracle Database 11g, the script is ordm_olap_11g_install_scr.sql
When the relational data exists in the Oracle Retail Data Model data warehouse, the OLAP cube initial load script loads relational table data (from a specified start date to the present time) into the OLAP cubes. It also performs the default OLAP forecasts. (For detailed information about the behavior of the OLAP cube initial load script, see the discussion on the OLAP component installation scripts in Oracle Retail Data Model Reference.)
You can execute the OLAP cube initial load SQL script in the following ways:
Implicitly, by installing the Oracle Retail Data Model OLAP component after you have loaded data into the Oracle Retail Data Model relational tables. For instructions on how to install the Oracle Retail Data Model OLAP component, see Oracle Retail Data Model Installation Guide.
Explicitly, after you install the Oracle Retail Data Model OLAP component and populate the relational tables. In this case, you execute the OLAP cube initial load SQL program as you execute any other SQL program.
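For an explicit run, you execute the script like any other SQL script; for example, in a SQL*Plus session on Oracle Database 10g (the user name and password are placeholders, and the script is assumed to be in the current directory):

```sql
-- Illustrative SQL*Plus session; credentials are placeholders.
$ sqlplus ordm_user/password

-- Run the OLAP cube initial load script for Oracle Database 10g:
SQL> @ordm_olap_install_scr.sql
```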
On a scheduled basis to update the OLAP cube data with the relational data that has been added to the Oracle Retail Data Model data warehouse since the initial load of the OLAP cubes.
This type of load (sometimes referred to as an "intermittent" or "refresh" load) merely adds relational data from a specified time period to the data in the Sales and Inventory cubes; it does not change the data in the Sales Forecast and Inventory Forecast cubes.
Oracle Retail Data Model does not provide an OLAP intermittent cube load script. You must write your own OLAP intermittent cube load script using the RBIA_OLAP_ETL_AW_LOAD package. For information on writing your own intermittent OLAP cube program and on updating the data in the OLAP forecast cubes, see "Refreshing OLAP Cube Data".