Oracle® Retail Data Model Operations Guide
10g Release 2 (10.2)

Part Number E14480-02
5 Maintaining an Oracle Retail Data Model Warehouse

This chapter discusses how to refresh an Oracle Retail Data Model data warehouse.

Overview: Maintaining an Oracle Retail Data Model

You need to load your Oracle Retail Data Model data warehouse regularly so that it can serve its purpose of facilitating business analysis. To do this, data from one or more operational systems needs to be extracted and copied into the data warehouse. The challenge in data warehouse environments is to integrate, rearrange and consolidate large volumes of data over many systems, thereby providing a new unified information base for business intelligence.

The successive loads and transformations must be scheduled and processed in a specific order. Depending on the success or failure of the operation or parts of it, the result must be tracked and subsequent, alternative processes might be started.

The way you perform these incremental loads varies depending on whether you are maintaining relational tables and views, or OLAP cubes:

Maintaining Relational Tables and Views

Once you have implemented an Oracle Retail Data Model data warehouse as described in Chapter 3, "Populating the Oracle Retail Data Model Warehouse", you can administer the relational tables and views in the relational physical model in the same way you administer any other data warehouse.

You perform ETL on a scheduled basis to reflect changes made to the original source system. During this step, you physically insert the new, clean data into the production data warehouse schema, and take all of the other steps necessary (such as building indexes, validating constraints, taking backups) to make this new data available to the end users. Once all of this data has been loaded into the data warehouse, you update the relational materialized views to reflect the latest data.
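As an illustration of the final refresh step, a post-load script might call the DBMS_MVIEW package to update the relational materialized views. This is a minimal sketch; the materialized view names used here are placeholders, not actual Oracle Retail Data Model object names:

```sql
-- Hedged sketch: refresh two materialized views after a load completes.
-- DWV_SLS_SUMMARY and DWV_INV_SUMMARY are hypothetical view names.
BEGIN
  -- One method character per view: 'C' = complete refresh;
  -- use '?' to let Oracle choose a fast refresh where possible.
  DBMS_MVIEW.REFRESH(list   => 'DWV_SLS_SUMMARY,DWV_INV_SUMMARY',
                     method => 'CC');
END;
/
```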

When you have used Oracle Warehouse Builder (OWB) for the ETL processes that implement the relational physical model, you can use the typical OWB process to perform periodic updates to the relational objects in your Oracle Retail Data Model data warehouse.

Tip:

See Oracle Warehouse Builder User's Guide and Chapter 3, "Populating the Oracle Retail Data Model Warehouse."

Refreshing OLAP Cube Data

Since OLAP cubes do not use the typical ETL workflow process, you cannot use OWB to refresh OLAP cube data.

OLAP cubes are populated through SQL scripts that use the RBIA_OLAP_ETL_AW_LOAD package that is provided with the ORDM OLAP component. The subprograms in the RBIA_OLAP_ETL_AW_LOAD package support two modes of loading OLAP cubes: historical mode and incremental mode. Historical mode is used by the OLAP installation scripts provided with Oracle Retail Data Model. You use incremental mode to refresh OLAP cube data.

Note:

An incremental load of OLAP cubes does not recalculate the cubes that hold forecast data. To refresh forecast data, you also need to rerun the forecast process as described in "Updating Forecast Cubes".

Specifically, to refresh OLAP cubes, you:

  1. Write a script that calls the subprograms in RBIA_OLAP_ETL_AW_LOAD package in incremental mode. For an example, see "Sample Incremental Load".

    See:

    For detailed information about the RBIA_OLAP_ETL_AW_LOAD package and its subprograms, see Oracle Retail Data Model Reference.
  2. Execute this script on a regular basis.
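One way to execute such a script on a regular basis is with the Oracle Scheduler. The following sketch assumes the incremental-mode calls have been wrapped in a stored procedure; the job and procedure names are hypothetical:

```sql
-- Hedged sketch: run an assumed wrapper procedure nightly at 2 a.m.
-- OLAP_INCR_REFRESH and RBIA_OLAP_INCR_LOAD are hypothetical names.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'OLAP_INCR_REFRESH',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'RBIA_OLAP_INCR_LOAD',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY; BYHOUR=2',
    enabled         => TRUE);
END;
/
```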

Sample Incremental Load

The following code, executed while logged in as BIA_RTL_OLAP, triggers an incremental load of the OLAP cubes.

set serverout on size 1000000
set linesize 200
set pagesize 0
set timing on
exec cwm2_olap_manager.set_echo_on;
--DATA SETUP Incremental
UPDATE BIA_RTL.DWC_ETL_PARAMETER
SET
FROM_DATE_ETL = TO_DATE('21-JAN-2007', 'DD-MON-YYYY'),
TO_DATE_ETL = TO_DATE('21-JAN-2007', 'DD-MON-YYYY'),
LAST_UPDT_DT = SYSDATE,
LAST_UPDT_BY = USER
WHERE PROCESS_NAME = 'RBIA-INTRA-ETL-OLAP'
;
COMMIT;
-- Incremental BUILD
declare
aint integer :=100;
begin 
  aint := rbia_olap_etl_aw_load.olap_etl_aw_dimbuild('INCREMENTAL', 'EXECUTE', 2);
  aint := rbia_olap_etl_aw_load.olap_etl_aw_build('INCREMENTAL', 'EXECUTE', 2);
  
  if aint = 0 then
     dbms_output.put_line('Function call build successful');
  else 
     dbms_output.put_line('Function call build failed');
  end if;
  
end;
/

OLAP Incremental Load Recovery

When a load fails or is only partially successful, you need to identify the type of load error and take the steps necessary to correct it. The following scenarios provide examples of common errors and how to correct them.

Note:

The scenarios and examples in this section use the OLAP Demo (sample schema) for Oracle Retail Data Model. The values of the analytic workspace internal partition names, the start and end date boundaries for Business Months, and so forth, depend on the Time Calendar scripts installed with the Oracle Retail Data Model sample schema, and are explained from the perspective of the default Time Business Hierarchy, which is week based.

A common scenario in which you need to perform some cleanup activity is a partial or failed load. The historical load is always performed on a fresh or empty analytic workspace, so it can safely be restarted regardless of the current state of the workspace. Note, however, that running the historical load removes all data from the analytic workspace; consequently, it is typically run only once during the implementation life cycle.

The incremental load, by contrast, runs as a scheduled process with a definite frequency and affects the cube only for the time range being loaded. Hence, if an incremental load fails or is only partially successful, you must reload the data after determining the reason for the failure.

The cleanup required for a partial load depends on the load situation and the extent of the failure. The following scenarios illustrate the steps to take to rerun a load that failed or loaded only partially:

When the incremental load fails completely and none of the records have loaded successfully

This situation can occur because of missing dimension information. Assuming that missing dimension information has subsequently been made available:

  • The error is typically caused by an invalid dimension value or foreign key in the fact table.

  • Loads usually process dimensions first and then attempt the facts, so the failure means that dimension information present in the fact table is missing from the dimension tables. Correct the dimension information to include the missing dimension values (for example, wait for the right file, or wait for the regular intra-ETL load to complete), and then attempt the load again.

When the incremental load is partly successful, but some records fail to load due to invalid dimension information

Assuming that missing dimension information has subsequently been made available:

  • The error is typically caused by an invalid dimension value or foreign key in the fact table.

  • Because the missing dimension information is now available, attempt the incremental load again. The previously successful records are simply reloaded into the cube, and the previously failed records now load successfully.

When the incremental load is successful, but additional fact information needs to be loaded

In this case, because the data being loaded is additional (extra records have arrived and none of the existing records are invalid), attempt the incremental load again. The previously successful records are reloaded into the cube, and the additional records are processed this time, succeeding or failing depending on the validity of the data.

When the incremental load is successful, but an alternate set of data records must be loaded

In this case, the earlier set of records loaded into the cube for the incremental load was from an invalid source (wrong file for example) and needs to be undone. This can be due to a single record or multiple records being in error. You need to:

  • Undo the effect of the incremental load performed.

  • Deleting the faulty records in the source data and reloading the correct set of records will not result in the cube containing only the valid records. The cube will contain all records loaded earlier as well as the current (correct set of) records.

  • Data already loaded into the cube remains in its place unless explicitly deleted from the cube.

  • To perform the delete action in the cube, execute OLAP DML commands to selectively (partially) clean up the cube of earlier records and to reload the data.

  • Using the time range of the incremental data load as the Time dimension boundaries, perform the clean up in the cube.

  • For example, assume that 5,000 records have been incrementally loaded into the Sales cube for a single day's data (21-JAN-2007), but you now find that the original data file was invalid. You need to incrementally load a new file of 4,000 records for 21-JAN-2007. (These 4,000 records can contain a mixture of modified records from the earlier file and new records, and some earlier records may be missing entirely from this file.) To perform this action, clean up the cube for 21-JAN-2007 and reload the data in incremental mode, as illustrated in Example 5-1, "Cleaning up and reloading a cube".

Example 5-1 Cleaning up and reloading a cube

Assume that you have populated the PSLSINV analytic workspace using the sample schema data. In the Oracle Retail Data Model OLAP analytic workspace PSLSINV, the Sales cube is partitioned at the Business Month level of the Time dimension (in both Oracle Database 10g and 11g); in Oracle Database 11g, it is also defined as a cube with compressed composites. Hence, a separate variable stores the Sales cube data for each Business Month. The data for day 21-JAN-2007 resides in the Business Month level partition that corresponds to the Month value BSNS_MO_20061225. In the partition template for the Sales cube (that is, OOS_SALES_PARTITION_TEMPLATE), that partition contains the DAY member with the value DAY_20070121.

The solution for reloading DAY_20070121 after clearing the previously loaded data is twofold:

  1. In the Analytic Workspace Manager, attach the PSLSINV analytic workspace in read-write mode, and then issue the following OLAP DML commands to clean up the data in the Sales cube for the partition that contains values dimensioned by DAY_20070121. The actual commands vary depending on the Oracle Database release you are using:

    • In Oracle Database 10g, issue the following commands.

      CLEAR ALL FROM OOS_SALES_PRT_TOPVAR(partition p82)
      UPDATE PSLSINV
      COMMIT
      
    • In Oracle Database 11g, issue the following commands.

      CLEAR ALL FROM OOS_SALES_STORED(partition p246)
      UPDATE PSLSINV
      COMMIT
      
  2. Reload the data in the Sales cube for the time range 25-DEC-2006 through 21-JAN-2007, which corresponds to the Time dimension boundaries of the partition that has been purged of all data.

    Log in as BIA_RTL_OLAP, then issue your commands in SQL*Plus. The actual commands vary depending on the Oracle Database release you are using:

    • In Oracle Database 10g, issue the following commands.

      set serverout on size 1000000
      set linesize 200
      set pagesize 0
      set timing on
      exec cwm2_olap_manager.set_echo_on;
      --DATA SETUP for incremental load of partition p82 in OOS_SALES cube
      UPDATE BIA_RTL.DWC_ETL_PARAMETER
      SET 
                      FROM_DATE_ETL = TO_DATE('25-DEC-2006', 'DD-MON-YYYY'),
                      TO_DATE_ETL = TO_DATE('21-JAN-2007', 'DD-MON-YYYY'),
                      LAST_UPDT_DT = SYSDATE,
                      LAST_UPDT_BY = USER
      WHERE PROCESS_NAME = 'RBIA-INTRA-ETL-OLAP'
      ;
      COMMIT;
      -- Incremental BUILD for cube: OOS_SALES
      declare
      aint integer :=100;
      begin 
        
        aint := rbia_olap_etl_aw_load.olap_etl_aw_reset_views('Incremental');
        if aint = 0 then
           dbms_output.put_line('Function call resetviews successful');
           aint := rbia_olap_etl_aw_load.olap_etl_aw_dimbuild('INCREMENTAL', 'EXECUTE', 2);
                      
           aint := rbia_olap_etl_aw_load.olap_etl_aw_cubebuild('OOS_SALES.CUBE', 'INCREMENTAL', 'EXECUTE', 2);
           if aint = 0 then
               dbms_output.put_line('Function call cubebuild successful');
           else 
               dbms_output.put_line('Function call cubebuild failed');
            end if;
        else 
           dbms_output.put_line('Function call resetviews failed');
        end if;
      end;
      /
      
    • In Oracle Database 11g, issue the following commands.

      set serverout on size 1000000
      set linesize 200
      set pagesize 0
      set timing on
      
      --DATA SETUP for incremental load of partition p246 in OOS_SALES cube
      
      UPDATE BIA_RTL.DWC_ETL_PARAMETER
      SET 
                      FROM_DATE_ETL = TO_DATE('25-DEC-2006', 'DD-MON-YYYY'),
                      TO_DATE_ETL = TO_DATE('21-JAN-2007', 'DD-MON-YYYY'),
                      LAST_UPDT_DT = SYSDATE,
                      LAST_UPDT_BY = USER
      WHERE PROCESS_NAME = 'RBIA-INTRA-ETL-OLAP'
      ;
      COMMIT;
      
      -- Incremental BUILD for cube: OOS_SALES
      declare
      aint integer :=100;
      begin 
        
        aint := rbia_olap_etl_aw_load.olap_etl_aw_reset_views('INCREMENTAL');
      
        if aint = 0 then
           dbms_output.put_line('Function call resetviews successful');
           aint := rbia_olap_etl_aw_load.olap_etl_aw_dimbuild('INCREMENTAL', 'EXECUTE', 2);
           aint := rbia_olap_etl_aw_load.olap_etl_aw_cubebuild('OOS_SALES', 'INCREMENTAL', 'EXECUTE', 2);
           if aint = 0 then
               dbms_output.put_line('Function call cubebuild successful');
           else 
               dbms_output.put_line('Function call cubebuild failed');
            end if;
        
        else 
           dbms_output.put_line('Function call resetviews failed');
      
        end if;
        
      end;
      /
      
      

Updating Forecast Cubes

During the initial load of the OLAP cubes, the default forecasting programs (the OLAP DML programs FORECAST_STOCK_SALES and FORECAST_STOCK_INV) populate the Sales Forecast and Inventory Forecast cubes, using data from two historical years to forecast one year into the future. To refresh the values in the forecast cubes on an incremental basis, you need to write an incremental load script that executes those programs.

When creating forecasts to run intermittently, the main points to consider are the frequency with which you want to run the forecasts and the duration of future Time periods over which you want to forecast. Because forecasting depends on this yearly data, there is typically no need to refresh the data in the Sales Forecast and Inventory Forecast cubes as frequently as you refresh the Sales and Inventory cubes. For example, you could schedule the forecasts to execute every month and use the same forecasting Time periods as those used by the historical load script. In this case, you could decide to have the intermittent forecasts overwrite the data in the forecast cubes. On the other hand, you could forecast more frequently, in which case, to avoid overlapping forecasts, you could create new measures to hold the intermittent forecasts, or even create entirely new forecasts as described in "Creating a New Forecast".

Note:

If you want to replace the data in the Sales Forecast and Inventory Forecast cubes that was generated during the initial load of the OLAP cubes with new data, you must perform another historical (initial) load of the OLAP cubes.

Creating a New Forecast

To create a new forecast:

  1. Create a new measure to hold the result of the forecast, as described in Oracle OLAP Application Developer's Guide.

  2. Open the PSLSINV analytic workspace in the Analytic Workspace Manager.

    See:

    Oracle OLAP Application Developer's Guide
  3. Within the Analytic Workspace Manager, open the Analytic Workspace Worksheet.

  4. Create a new OLAP DML forecast program in one of the following ways:

    • Create an OLAP DML forecasting program that uses the single OLAP DML FORECAST command to perform the forecast. In this case, you can use the forecast programs that are delivered with Oracle Retail Data Model as templates. These programs are named FORECAST_STOCK_SALES and FORECAST_STOCK_INV and reside in the PSLSINV analytic workspace.

      See:

      FORECAST command in Oracle OLAP DML Reference
    • Write a new OLAP DML forecasting program using the OLAP DML forecasting commands.

      See:

      The discussion on writing a forecasting program in Oracle OLAP DML Reference
  5. Populate the new forecast by executing the new forecasting program using the OLAP DML CALL command.

    Tip:

    You can embed OLAP DML commands in a SQL program using the DBMS_AW package.
  6. (Optional) Integrate the new forecast program into the Oracle Retail Data Model OLAP cube load process.
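Combining step 5 with the DBMS_AW tip, a new forecast program could be invoked from SQL roughly as follows. This is a sketch only; MY_SALES_FORECAST is an assumed program name, not one delivered with Oracle Retail Data Model:

```sql
-- Hedged sketch: execute a custom OLAP DML forecast program from SQL.
-- MY_SALES_FORECAST is a hypothetical program in the PSLSINV workspace.
BEGIN
  DBMS_AW.EXECUTE('AW ATTACH PSLSINV RW');   -- attach workspace read-write
  DBMS_AW.EXECUTE('CALL MY_SALES_FORECAST'); -- run the forecast program
  DBMS_AW.EXECUTE('UPDATE');                 -- write changes to the AW
  DBMS_AW.EXECUTE('COMMIT');
  DBMS_AW.EXECUTE('AW DETACH PSLSINV');
END;
/
```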